Turning the tables on citation analysis one more time: Principles for comparing sets of documents

@article{Leydesdorff2011TurningTT,
  title={Turning the tables on citation analysis one more time: Principles for comparing sets of documents},
  author={Loet Leydesdorff and Lutz Bornmann and R{\"u}diger Mutz and Tobias Opthof},
  journal={J. Assoc. Inf. Sci. Technol.},
  year={2011},
  volume={62},
  pages={1370--1381}
}
We submit newly developed citation impact indicators based not on arithmetic averages of citations but on percentile ranks. Citation distributions are—as a rule—highly skewed and should not be arithmetically averaged. With percentile ranks, the citation score of each paper is rated in terms of its percentile in the citation distribution. The percentile ranks approach allows for the formulation of a more abstract indicator scheme that can be used to organize and/or schematize different impact… 
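To make the approach concrete, here is a minimal sketch (not the authors' implementation) of rating a paper by its percentile in a skewed citation distribution; the mid-rank treatment of ties is one of several conventions discussed in this literature and is an assumption of the example:

```python
# A minimal sketch of the percentile-rank idea from the abstract: each paper's
# citation count is rated by its percentile position within the citation
# distribution of its reference set. Data are illustrative.

def percentile_rank(citations, reference_set):
    """Percentage of papers in the reference set with fewer citations,
    counting ties as half (a mid-rank convention; other rules exist)."""
    below = sum(1 for c in reference_set if c < citations)
    ties = sum(1 for c in reference_set if c == citations)
    return 100.0 * (below + 0.5 * ties) / len(reference_set)

reference_set = [0, 1, 1, 2, 3, 5, 8, 13, 40, 120]  # highly skewed, as is typical
print(percentile_rank(5, reference_set))    # a mid-range paper
print(percentile_rank(120, reference_set))  # a highly cited paper
```

Because the percentile depends only on rank order, the extreme skew of the distribution no longer distorts the score the way it distorts an arithmetic average.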

Citations

The normalization of citation counts based on classification systems

TLDR
This study describes an ideal solution for the normalization of citation impact: the reference set for the publication in question is collated by means of a classification scheme, where every publication is associated with a single principal research field or subfield entry and a publication year.
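As an illustration of collating reference sets by a classification scheme, the sketch below keys each paper on a single (field, year) pair; the records and field names are invented for the example:

```python
# Illustrative sketch of the "ideal solution" described above: every paper is
# assigned one principal field and one publication year, and its reference set
# is simply all papers sharing that (field, year) key.
from collections import defaultdict

papers = [
    {"id": "p1", "field": "chemistry", "year": 2008, "citations": 12},
    {"id": "p2", "field": "chemistry", "year": 2008, "citations": 3},
    {"id": "p3", "field": "mathematics", "year": 2008, "citations": 2},
]

reference_sets = defaultdict(list)
for p in papers:
    reference_sets[(p["field"], p["year"])].append(p["citations"])

# A paper is then compared only against its own (field, year) reference set,
# for instance via a percentile-rank function like the one sketched earlier.
print(reference_sets[("chemistry", 2008)])  # [12, 3]
```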

How to analyze percentile citation impact data meaningfully in bibliometrics: The statistical analysis of distributions, percentile rank classes, and top-cited papers

TLDR
The suggestions take into account the distribution of percentiles over the publications in the sets and concentrate on the range of publications with the highest citation impact, the range that is usually of most interest in the evaluation of scientific performance.

Assigning publications to multiple subject categories for bibliometric analysis: An empirical case study based on percentiles

TLDR
This study examines whether the calculation of differences between the citation impact of research institutions is affected by whether the minimum, the maximum, the mean, or the median impact for the different subject categories is used.

How to analyse percentile impact data meaningfully in bibliometrics: The statistical analysis of distributions, percentile rank classes and top-cited papers

TLDR
This study suggests how percentiles can be analysed meaningfully for an evaluation study and focuses on the range of publications with the highest citation impact - that is, the range which is usually of most interest in the evaluation of scientific performance.
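Both versions of this study concentrate on the top of the percentile distribution. A minimal sketch of one such high-end statistic, the share of papers in the top-10% class (often written PPtop-10%), assuming a simple threshold at the 90th percentile; the percentile values are invented:

```python
# Sketch of focusing on the range with the highest citation impact: the
# proportion of a set's papers at or above the 90th percentile of their
# reference distributions.
percentiles = [12.0, 45.5, 67.0, 91.2, 95.0, 99.1, 50.0, 88.0]

pp_top10 = sum(1 for p in percentiles if p >= 90.0) / len(percentiles)
print(f"PPtop-10% = {pp_top10:.2%}")  # a world-average set would score 10%
```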

Inconsistencies of recently proposed citation impact indicators and how to avoid them

  • M. Schreiber
  • Computer Science
    J. Assoc. Inf. Sci. Technol.
  • 2012
TLDR
It is shown that under certain circumstances, in particular for small data sets, the recently proposed citation impact indicators I3(6PR) and R(6,k) behave inconsistently when additional papers or citations are taken into consideration, and that a different way of assigning weights avoids these problems.
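For context, a minimal sketch of an I3-style indicator over six percentile rank classes, the kind of scheme the inconsistency argument targets. The class boundaries and weights below follow the six-class convention common in this literature (bottom 50% up to top 1%, weighted 1 to 6), but they should be read as assumptions of the example:

```python
# Sketch of an I3-style indicator with six percentile rank classes.
# Boundaries and weights are one common convention, not a fixed standard.
CLASSES = [  # (lower percentile bound, weight)
    (99.0, 6), (95.0, 5), (90.0, 4), (75.0, 3), (50.0, 2), (0.0, 1),
]

def weight(percentile):
    """Map a paper's percentile to the weight of its rank class."""
    for bound, w in CLASSES:
        if percentile >= bound:
            return w
    return 1

def i3(percentiles):
    """I3 = sum of class weights over all papers in the set."""
    return sum(weight(p) for p in percentiles)

print(i3([12.0, 55.0, 91.2, 99.5]))  # 1 + 2 + 4 + 6 = 13
```

Because papers near a class boundary can jump a whole weight step when one extra citation arrives, small sets are especially sensitive, which is the behavior the study above analyzes.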

The problem of percentile rank scores used with small reference sets

  • L. Bornmann
  • Computer Science
    J. Assoc. Inf. Sci. Technol.
  • 2013
Dear Sir, Instead of a relative mean citation rate, a percentile rank score (PRS) can be used in bibliometrics to generate a normalized citation impact for a paper. The use of a PRS is very…

A Reverse Engineering Approach to the Suppression of Citation Biases Reveals Universal Properties of Citation Distributions

TLDR
An exhaustive study of the citation patterns of millions of papers is performed, and a simple transformation of citation counts that suppresses the disproportionate citation counts among scientific domains is derived.

Universality of performance indicators based on citation and reference counts

TLDR
This work demonstrates that comparisons can be made between publications from different disciplines and publication dates, regardless of their citation count and without expensive access to the whole world-wide citation graph.
...

References

Showing 1–10 of 74 references

Reference standards for citation based assessments

TLDR
The pros and cons of the three possible choices of reference standards for citation assessments are discussed; the set of journals cited by the journal in question seems to be a useful basis for comparison.

Citation counts for research evaluation: standards of good practice for analyzing bibliometric data and presenting and interpreting results

TLDR
Standards of good practice for analyzing bibliometric data and presenting and interpreting the results are presented.

Universality of citation distributions: A validation of Radicchi et al.'s relative indicator cf = c/c0 at the micro level using data from chemistry

TLDR
Results indicate that z-scores are better suited than cf values to produce universality of discipline-specific citation distributions.
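For reference, a small sketch of the two normalizations compared here, assuming the standard definitions cf = c/c0 (with c0 the field mean) and z = (c − mean)/SD; all values are invented:

```python
# Sketch contrasting the relative indicator cf with the z-score: cf rescales
# by the field average only, while z also accounts for the spread of the
# field's citation distribution.
from statistics import mean, stdev

field_citations = [0, 1, 2, 2, 3, 5, 9, 20]
c = 9
c0, s = mean(field_citations), stdev(field_citations)

cf = c / c0        # field-mean rescaling
z = (c - c0) / s   # mean and standard deviation
print(f"cf = {cf:.2f}, z = {z:.2f}")
```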

Is citation analysis a legitimate evaluation tool?

TLDR
It is concluded that as the scientific enterprise becomes larger and more complex, and its role in society more critical, it will become more difficult, expensive and necessary to evaluate and identify the largest contributors.

Content-based and algorithmic classifications of journals: Perspectives on the dynamics of scientific communication and indexer effects

TLDR
This study tests the results of two recently available algorithms for the decomposition of large matrices against two content-based classifications of journals: the ISI Subject Categories and the field/subfield classification of Glänzel and Schubert (2003).

Towards a new crown indicator: an empirical analysis

TLDR
An empirical comparison between two normalization mechanisms for citation-based indicators of research performance, both of which aim to normalize citation counts for the field and the year in which a publication was published, finds that at high aggregation levels, such as the level of large research institutions or of countries, the differences between the two mechanisms are very small.
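A hedged sketch of the two mechanisms being compared, assuming the usual "ratio of averages" versus "average of ratios" formulations from this debate; the numbers are invented for illustration:

```python
# The older crown-style indicator divides summed citations by summed field
# expectations (a ratio of averages); the newer mechanism averages the
# per-paper ratios instead.
citations = [10, 2, 0, 30]        # observed citations per paper
expected = [5.0, 4.0, 1.0, 10.0]  # field/year expected citations per paper

old_crown = sum(citations) / sum(expected)                        # ratio of averages
new_crown = sum(c / e for c, e in zip(citations, expected)) / len(citations)
print(f"ratio of averages = {old_crown:.2f}, average of ratios = {new_crown:.2f}")
```

The two orderings of division and averaging weight individual papers differently, which matters for small sets but, as the study above reports, washes out at high aggregation levels.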

Caveats for the Use of Citation Indicators in Research and Journal Evaluations

TLDR
The assumption that citation and publication practices are homogeneous within specialties and fields of science is invalid, and the delineation of fields and specialties is fuzzy.

Citation analysis cannot legitimate the strategic selection of excellence

TLDR
This correspondence uses previously published data of Van Raan (2006) to address the pivotal issue of how the results of citation analysis correlate with the results of peer review; the peer-review outcome was neither significantly correlated with the two parameters developed by CWTS in the past nor with the more recently proposed h-index.

Universality of citation distributions: Toward an objective measure of scientific impact

TLDR
It is shown that the probability that an article is cited c times has large variations between different disciplines, but all distributions collapse onto a universal curve when the relative indicator cf = c/c0 is considered, where c0 is the average number of citations per article for the discipline.
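A minimal sketch of this rescaling, assuming only the definition given in the summary above (cf = c/c0 with c0 the discipline average); the disciplines and counts are invented:

```python
# Sketch of Radicchi et al.'s relative indicator: dividing each paper's
# citation count by its discipline's average c0 rescales very differently
# cited fields onto comparable cf values.
disciplines = {
    "mathematics": [0, 1, 1, 2, 4],
    "molecular biology": [5, 10, 20, 40, 80],
}

for name, counts in disciplines.items():
    c0 = sum(counts) / len(counts)
    cf = [c / c0 for c in counts]
    print(name, [round(x, 2) for x in cf])
```

After rescaling, both fields span the same cf range even though their raw counts differ by an order of magnitude, which is the sense in which the distributions become universal.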
...