Citation Statistics

Robert Adler, John Ewing, Peter Taylor
This is a report about the use and misuse of citation data in the assessment of scientific research. The idea that research assessment must be done using “simple and objective” methods is increasingly prevalent today. The “simple and objective” methods are broadly interpreted as bibliometrics, that is, citation data and the statistics derived from them. There is a belief that citation statistics are inherently more accurate because they substitute simple numbers for complex judgments, and hence… 

The aim of this survey is to discuss an approach based on a measure of "usefulness of scientific contribution" called the "usc-index", published in (Markov et al., 2013) and grounded in the theory of the Knowledge Market.

The mismeasure of science: Citation analysis

Criticism of the use of citations for evaluation continues to accumulate and undermines the desire to have an easy "scientific", that is, quantitative, method of evaluation.

The use and misuse of journal metrics and other citation indicators

  • D. Pendlebury, Archivum Immunologiae et Therapiae Experimentalis, 2009
The nature and use of the journal impact factor and other common bibliometric measures for assessing research in the sciences and social sciences based on data compiled by Thomson Reuters are reviewed to help government policymakers, university administrators, and individual researchers become better acquainted with the potential benefits and limitations of bibliometrics in the evaluation of research.

Statistical modelling of citation exchange between statistics journals

Analysis of the table of cross‐citations among a selection of statistics journals suggests that modelling the exchange of citations between journals is useful to highlight the most prestigious journals, but also that journal citation data are characterized by considerable heterogeneity, which needs to be properly summarized.

A simple model for citation curve

A simple equation is presented that can be used to derive closed-form expressions for various citation indices, analyze the effect of time, and identify the individual contribution to the Hirsch index for a group.

A Reverse Engineering Approach to the Suppression of Citation Biases Reveals Universal Properties of Citation Distributions

An exhaustive study of the citation patterns of millions of papers is performed, and a simple transformation of citation counts that suppresses the disproportionate citation counts among scientific domains is derived.

Analysis of bibliometric indicators for individual scholars in a large data set

This work analyzes the scientific profiles of more than 30,000 researchers and finds that the h-index of a scientist is strongly correlated with the number of citations that she/he has received, so that the number of citations can effectively be used as a proxy for the h-index.

Consistent bibliometric rankings of authors and of journals

Citation indexes for science; a new dimension in documentation through association of ideas.

The uncritical citation of disputed data by a writer, whether it be deliberate or not, is a serious matter and critical notes are increasingly likely to be overlooked with the passage of time.

What do citations count? The rhetoric-first model

This paper argues that citations should be thought of first as rhetoric and second as reward, and draws some implications for quantitative modeling of the citation process.

Impact of data sources on citation counts and rankings of LIS faculty: Web of Science versus Scopus and Google Scholar

Results show that Scopus significantly alters the relative ranking of those scholars that appear in the middle of the rankings and that GS stands out in its coverage of conference proceedings as well as international, non-English language journals.

Why the impact factor of journals should not be used for evaluating research

Alternative methods for evaluating research are being sought, such as citation rates and journal impact factors, which seem to be quantitative and objective indicators directly related to published science.

Measures for measures

Comparing commonly used measures of author quality, the mean number of citations per paper emerges as a better indicator than the more complex Hirsch index; a third method, the number of papers published per year, measures industry rather than ability.

Eigenfactor: Measuring the value and prestige of scholarly journals

In 1927, two chemists at Pomona College published an article in Science, proposing that librarians could use data about citation rates to select appropriate journals for a small library collection.

Generalized h-index for Disclosing Latent Facts in Citation Networks

Several inefficiencies of the h-index are demonstrated and a pair of generalizations and effective variants of it are developed to deal with scientist ranking and with publication forum ranking.
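Several of the entries above turn on the Hirsch index, defined as the largest h such that an author has h papers with at least h citations each. As a minimal illustrative sketch (the function name and sample citation counts are hypothetical, not drawn from any of the cited papers), the index can be computed directly from a list of per-paper citation counts:

```python
def h_index(citations):
    """Return the largest h such that h papers have >= h citations each."""
    # Sort citation counts in descending order, then walk down the list:
    # the h-index is the last rank i at which the i-th paper still has
    # at least i citations.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))   # 4: four papers have at least 4 citations
print(h_index([25, 8, 5, 3, 3]))   # 3: the single highly cited paper adds little
print(h_index([]))                 # 0
```

The second example illustrates the behavior criticized in several of these papers: one highly cited paper barely moves the h-index, which is part of why simpler measures such as mean citations per paper are sometimes argued to be better indicators.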

Effectiveness of Journal Ranking Schemes as a Tool for Locating Information

This work systematically evaluates the effectiveness of journals, through the work of editors and reviewers, at evaluating unpublished research, and develops a model for the asymptotic number of citations accrued by papers published in a journal that closely matches the data.

Universal Behavior of a Research Productivity Index

Indexes that give a good representation of an individual's productivity are a theme of major importance for the evaluation of and comparison among researchers. Recently, a new index was proposed…

Why are the impacts of the leading medical journals so similar and yet so different? Item-by-item audits reveal a diversity of editorial material

Using the 1981 and 1982 Science Citation Index (SCI) and other citation data, the top 5 of 78 general and internal medicine journals were examined in detail, and an article-by-article audit made it possible to differentiate the impact of different types of editorial material.