Measures for measures

@article{Lehmann2006MeasuresFM,
  title={Measures for measures},
  author={Sune Lehmann and A. D. Jackson and Benny Lautrup},
  journal={Nature},
  year={2006},
  volume={444},
  pages={1003-1004}
}
Are some ways of measuring scientific quality better than others? Sune Lehmann, Andrew D. Jackson and Benny E. Lautrup analyse the reliability of commonly used methods for comparing citation records.

Statistical noise: Citation analysis can loom large in a scientist's career. In this issue Sune Lehmann, Andrew Jackson and Benny Lautrup compare commonly used measures of author quality. The mean number of citations per paper emerges as a better indicator than the more complex Hirsch index; a third…
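The two indicators singled out in the Commentary, the mean number of citations per paper and the Hirsch index, have standard textbook definitions. Below is a minimal Python sketch of both (not code from the paper; the citation counts are hypothetical):

```python
# Standard definitions of the two author-level indicators compared in the
# Commentary; the example citation counts below are hypothetical.

def h_index(citations):
    """Largest h such that at least h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

def mean_citations_per_paper(citations):
    """Mean number of citations per paper."""
    return sum(citations) / len(citations) if citations else 0.0

author = [42, 17, 9, 6, 3, 1, 0]          # hypothetical per-paper citation counts
print(h_index(author))                     # 4: four papers have at least 4 citations
print(mean_citations_per_paper(author))    # about 11.14
```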
A measure for the impact of research
TLDR
A measure is proposed that aims to quantify the impact of research while de-emphasizing productivity, providing scientists with an alternative and conceivably fairer evaluation of their work.
The h-index is no longer an effective correlate of scientific reputation
TLDR
A large-scale study of scientometric measures, analyzing millions of articles and hundreds of millions of citations across four scientific fields and two data platforms, finds that the correlation of the h-index with awards that indicate recognition by the scientific community has substantially declined.
Quantifying Long-Term Scientific Impact
TLDR
A mechanistic model is derived for the citation dynamics of individual papers, allowing us to collapse the citation histories of papers from different journals and disciplines into a single curve, indicating that all papers tend to follow the same universal temporal pattern.
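For orientation only, a sketch of the kind of single-curve citation model this line of work describes. The functional form (a lognormal aging curve inside an exponential) and the parameter names m, lam, mu, sigma are my assumption based on the commonly cited formulation, not taken verbatim from the paper, and the parameter values are arbitrary:

```python
import math

def cumulative_citations(t, m=30.0, lam=2.0, mu=1.0, sigma=1.0):
    """Assumed form: c(t) = m * (exp(lam * Phi((ln t - mu) / sigma)) - 1),
    where Phi is the standard normal CDF and t > 0 is years since publication.
    Parameter values are illustrative only."""
    phi = 0.5 * (1.0 + math.erf((math.log(t) - mu) / (sigma * math.sqrt(2.0))))
    return m * (math.exp(lam * phi) - 1.0)

# The curve rises and then saturates at m * (exp(lam) - 1) total citations.
for years in (1, 5, 10, 20):
    print(years, round(cumulative_citations(years), 1))
```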
A Measure of Total Research Impact Independent of Time and Discipline
TLDR
The proposed measures of research impact, tori and riq, have been implemented in the Smithsonian/NASA Astrophysics Data System and it is demonstrated that these measures are substantially less vulnerable to temporal debasement and cross-disciplinary bias than the most popular current measures.
A Measure of Research Taste
TLDR
The presented measure, CAP, balances the impact of publications and their quantity, thus incentivizing researchers to consider whether a publication is a useful addition to the literature, and is simple, interpretable, and parameter-free.
Using choquet integrals for evaluating citation indices in journal ranking
TLDR
This work interprets the fuzzy measures obtained from many new datasets to understand the importance of these newly published indices and how indicative they may be of a journal's quality.
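As background for the aggregation technique named in the title, here is a minimal sketch of the generic discrete Choquet integral with respect to a capacity (fuzzy measure). The capacities, datasets, and citation indices actually fitted in the paper are not reproduced; the two indices and the capacity values below are made up:

```python
def choquet_integral(scores, capacity):
    """Discrete Choquet integral of per-criterion scores with respect to a
    capacity: a set function with capacity(empty set) = 0, capacity(all) = 1,
    monotone under set inclusion."""
    order = sorted(scores, key=scores.get)          # criteria by ascending score
    total, prev = 0.0, 0.0
    for i, criterion in enumerate(order):
        upper_set = frozenset(order[i:])            # criteria scoring >= current one
        total += (scores[criterion] - prev) * capacity[upper_set]
        prev = scores[criterion]
    return total

# Hypothetical capacity over two made-up citation indices; the super-additive
# value on the pair encodes that the two indices complement each other.
capacity = {
    frozenset(): 0.0,
    frozenset({"index_a"}): 0.4,
    frozenset({"index_b"}): 0.5,
    frozenset({"index_a", "index_b"}): 1.0,
}
scores = {"index_a": 0.7, "index_b": 0.3}
print(choquet_integral(scores, capacity))   # 0.3*1.0 + (0.7-0.3)*0.4 = 0.46
```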
Hirsch index rankings require scaling and higher moment
TLDR
This work proposes an appropriate scaling of the h-index based on its probability distribution, calculated for any underlying citation distribution; the resulting approach outperforms existing index estimation models that focus only on the expected value (i.e., the first moment).
Escape from the impact factor
As Editor-in-Chief of the journal Nature, I am concerned by the tendency within academic administrations to focus on a journal's impact factor when judging the worth of scientific contributions.
Twenty Hirsch index variants and other indicators giving more or less preference to highly cited papers
TLDR
I analyse the citation records of 26 physicists in terms of various suggested variants of the Hirsch index, quantifying which indices and indicators yield similar rankings of the 26 datasets and which yield more deviating ones.
Venue Analytics: A Simple Alternative to Citation-Based Metrics
  • L. Keselman
  • Computer Science
    2019 ACM/IEEE Joint Conference on Digital Libraries (JCDL)
  • 2019
TLDR
It is shown that using venue scores to evaluate both authors and institutions produces quantitative measures that are comparable to approaches using citations or peer assessment, in contrast to many other existing evaluation metrics.
...

References

Causal relationship between article citedness and journal impact
TLDR
The relationship between article citedness and journal impact was investigated on the basis of complete publication lists provided by 16 senior scientists from a major Norwegian biomedical research institute, and it was possible to observe a twofold ratio in citedness between the two groups throughout the journal impact range.
Citation networks in high energy physics.
TLDR
The citation network constituted by the SPIRES database is investigated empirically and a consideration of citation distribution by subfield shows that the citation patterns of high energy physics form a remarkably homogeneous network.
Statistical properties of bibliometric indicators: Research group indicator distributions and correlations
  • A. F. J. van Raan
  • Economics
    J. Assoc. Inf. Sci. Technol.
  • 2006
TLDR
An empirical approach to the study of the statistical properties of bibliometric indicators at a very relevant but not simply ‘available’ aggregation level, the research group, finds that at this level the distribution functions of the main indicators, particularly the journal-normalized and the field-normalized indicators, approach normal distributions.
Supplementary information accompanies this Commentary on Nature’s website
  • The Niels Bohr Institute
  • 2003