Artifact Evaluation: Is It a Real Incentive?

@inproceedings{Childers2017ArtifactEI,
  title={Artifact Evaluation: Is It a Real Incentive?},
  author={Bruce R. Childers and Panos K. Chrysanthis},
  booktitle={2017 IEEE 13th International Conference on e-Science (e-Science)},
  year={2017},
  pages={488--489}
}
It is well accepted that we learn hard lessons when implementing and re-evaluating systems, yet it is also acknowledged that science faces a crisis in reproducibility. Experimental computer science is far from immune, although reproducibility should be easier to achieve in CS than in other sciences, given the field's emphasis on experimental artifacts such as source code, data sets, workflows, and parameters. The data management community pioneered methods at ACM SIGMOD 2007 and 2008 to encourage and incentivize authors to…

Citations

Artifact Evaluation: FAD or Real News?
TLDR
Data Management (DM), like many areas of computer science (CS), relies on empirical evaluation, using software, data sets, and benchmarks to assess new ideas and compare them with past innovations, yet few researchers make these artifacts available in a findable, accessible, interoperable, and reusable manner.
Community expectations for research artifacts and evaluation processes
TLDR
Some community expectations exceed those expressed in calls for papers and reviewing guidelines, yet there is no consensus on quality thresholds for artifacts in general; several actionable suggestions are derived that can help mature artifact evaluation in the inspected community and aid its introduction into other communities in computer science.
Understanding and Improving Artifact Sharing in Software Engineering Research
TLDR
A mixed-methods study, including a publication analysis and an online survey of 153 software engineering researchers, identifies several high-level challenges that affect the quality of artifacts, including mismatched expectations between artifact creators and users and a lack of sufficient reward for both creators and reviewers.
From FAIR research data toward FAIR and open research software
TLDR
This work reviews and analyzes the current state in this area in order to give recommendations for making research software FAIR and open.
Research artifacts and citations in computer systems papers
TLDR
The main finding is that papers with shared artifacts averaged approximately 75% more citations than papers with none, and the release of an artifact appears to increase the citations of a systems paper by some 34%.
Implementing FAIR Data Infrastructures
TLDR
This manifesto reports the findings from the Dagstuhl Perspectives Workshop and provides recommendations along two lines: how computer science can contribute to implementing FAIR data infrastructures and how to make computer science research itself more FAIR.
FAIR and Open Computer Science Research Software
TLDR
Observes that research software publishing practices in computer science and in computational science differ significantly, and gives recommendations for making computer science research software FAIR and open.
Streamlining the Inclusion of Computer Experiments In a Research Paper
TLDR
Reports on efforts to create an integrated toolchain for running, processing, and including the results of computer experiments in scientific publications.

References

FlowDroid: precise context-, flow-, field-, object-sensitive and lifecycle-aware taint analysis for Android apps (PLDI, 2014)
TLDR
FlowDroid is presented, a novel and highly precise static taint analysis for Android applications that successfully finds leaks in a subset of 500 apps from Google Play and about 1,000 malware apps from the VirusShare project.
The real software crisis (Communications of the ACM, 2015)
TLDR
Shares experiences from running artifact evaluation committees for five major conferences, identifying what works and what doesn't in order to improve the quality of artifact evaluation.
Repeatability in computer systems research (Communications of the ACM, 2016)
TLDR
To encourage repeatable research, fund repeatability engineering and reward commitments to sharing research artifacts.
1,500 scientists lift the lid on reproducibility (Nature, 2016)