Incidental or influential? - A decade of using text-mining for citation function classification


This work looks in depth at several studies that have attempted to automate citation importance classification based on publications' full text. We offer a comparison of their individual similarities, strengths and weaknesses. We analyse a range of features that have previously been used in this task. Our experimental results confirm that the number of in-text references is highly predictive of influence. Contrary to the work of Valenzuela et al. (2015) [1], we find abstract similarity to be one of the most predictive features. Overall, we show that many of the features previously described in the literature have either been reported as not particularly predictive, cannot be reproduced from their existing descriptions, or should not be used due to their reliance on external, changing evidence. Additionally, we find significant variance in the results produced by the PDF extraction tools used in the pre-processing stages of citation extraction. This has a direct and significant impact on the classification features that rely on this extraction process. Consequently, we discuss challenges and potential improvements in the classification pipeline, provide a critical review of the performance of individual features, and address the importance of constructing a large-scale gold-standard reference dataset.

Cite this paper

@inproceedings{Pride2017IncidentalOI,
  title={Incidental or influential? - A decade of using text-mining for citation function classification},
  author={David Pride and Petr Knoth},
  year={2017}
}