Revisiting Relative Indicators and Provisional Truths
@article{Leydesdorff2018RevisitingRI,
  title   = {Revisiting Relative Indicators and Provisional Truths},
  author  = {Loet Leydesdorff and Tobias Opthof},
  journal = {ArXiv},
  year    = {2018},
  volume  = {abs/1808.09665}
}
Following discussions in 2010 and 2011, scientometric evaluators have increasingly abandoned relative indicators in favor of comparing observed with expected citation ratios. The latter method provides parameters with error values allowing for the statistical testing of differences in citation scores. A further step would be to proceed to non-parametric statistics (e.g., the top-10%) given the extreme skewness (non-normality) of the citation distributions. In response to a plea for returning to…
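The non-parametric top-10% approach mentioned in the abstract can be sketched as follows. This is an illustrative sketch only: the helper names and the simple sort-based threshold are assumptions, and the tie-handling refinements discussed in this literature are deliberately ignored.

```python
def top10_threshold(reference_citations):
    # Citation count at the 90th percentile of the reference set
    # (simple sort-based cut; ties at the threshold are not resolved).
    ranked = sorted(reference_citations)
    return ranked[int(0.9 * len(ranked))]

def pp_top10(unit_citations, reference_citations):
    # Share of a unit's papers that reach the global top-10% threshold.
    threshold = top10_threshold(reference_citations)
    return sum(c >= threshold for c in unit_citations) / len(unit_citations)

# A unit with 2 of 3 papers at or above the reference threshold scores 2/3.
share = pp_top10([95, 50, 91], list(range(100)))
```

Because the indicator counts papers above a rank threshold rather than averaging citation counts, it is unaffected by the extreme skewness of citation distributions noted in the abstract.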
2 Citations
How well does I3 perform for impact measurement compared to other bibliometric indicators? The convergent validity of several (field-normalized) indicators
- Computer Science, Scientometrics
- 2019
The results indicate that the integrated impact indicator (I3) could be a valuable alternative to other bibliometric indicators, and show that the PPtop 1% indicator discriminates best among different quality levels.
The Pinski–Narin influence weight and the Ramanujacharyulu power-weakness ratio indicators revisited
- Computer Science, Scientometrics
- 2019
This paper compares two size-independent dimensionless indicators: the Pinski–Narin influence weight (IW) and the Ramanujacharyulu power-weakness ratio (PWR) and shows that at the non-recursive level, the two indicators are identical.
References
SHOWING 1-10 OF 29 REFERENCES
Relative indicators and relational charts for comparative assessment of publication output and citation impact
- Education, Scientometrics
- 2005
Cross-field comparison of scientometric indicators is severely hindered by the differences in publication and citation habits of science fields. However, relating publication and citation indicators…
Caveats for the journal and field normalizations in the CWTS ("Leiden") evaluations of research performance
- Physics, J. Informetrics
- 2010
Towards a new crown indicator: an empirical analysis
- Economics, Scientometrics
- 2011
An empirical comparison between two normalization mechanisms for citation-based indicators of research performance, both of which normalize citation counts for the field and the year in which a publication was published, finds that at high aggregation levels, such as at the level of large research institutions or countries, the differences between the two mechanisms are very small.
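In generic form, the two normalization mechanisms compared in this paper are a ratio of averages (the older "crown" indicator) and an average of ratios (the newer one). The minimal sketch below, with hypothetical per-paper expected citation rates, shows how they can diverge on small, skewed sets even though they agree at high aggregation levels:

```python
def ratio_of_averages(citations, expected):
    # Old-style normalization: total citations over total expected citations.
    return sum(citations) / sum(expected)

def average_of_ratios(citations, expected):
    # New-style normalization: mean of the per-paper citation/expected ratios.
    return sum(c / e for c, e in zip(citations, expected)) / len(citations)

# Two papers with expected rates 5 and 1; only the first is cited.
c, e = [10, 0], [5, 1]
old = ratio_of_averages(c, e)   # 10 / 6
new = average_of_ratios(c, e)   # (2 + 0) / 2
```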
Turning the tables on citation analysis one more time: Principles for comparing sets of documents
- Economics, J. Assoc. Inf. Sci. Technol.
- 2011
New citation impact indicators based not on arithmetic averages of citations but on percentile ranks are submitted, demonstrating that the proposed family of indicators is an improvement on averages-based indicators because one can account for the shape of the distributions of citations over papers.
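Converting raw citation counts into percentile ranks can be sketched as below. The mid-rank convention for ties and the construction of the reference set are assumptions for illustration, not taken from the paper:

```python
from bisect import bisect_left, bisect_right

def percentile_rank(citation_count, reference):
    # Mid-rank percentile of a citation count within a sorted reference set;
    # ranks, unlike averages, are robust to the skew of citation distributions.
    ref = sorted(reference)
    lo = bisect_left(ref, citation_count)
    hi = bisect_right(ref, citation_count)
    return 100.0 * (lo + hi) / (2 * len(ref))

# In a reference set of 1..10 citations, a 10-citation paper sits at the
# 95th percentile and a 1-citation paper at the 5th.
ranks = [percentile_rank(c, range(1, 11)) for c in (10, 1)]
```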
Measures for measures
- Economics, Nature
- 2006
Comparing commonly used measures of author quality, the mean number of citations per paper emerges as a better indicator than the more complex Hirsch index; a third method, the number of papers published per year, measures industry rather than ability.
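The two author-level measures compared here can be computed directly; this is a generic sketch of the standard definitions, not code from the paper:

```python
def mean_citations_per_paper(citations):
    # Average citations per paper, the measure favoured in this comparison.
    return sum(citations) / len(citations)

def h_index(citations):
    # Hirsch index: the largest h such that h papers have >= h citations each.
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# An author with papers cited [10, 8, 5, 4, 3] has h = 4 and mean 6.0.
h, mean = h_index([10, 8, 5, 4, 3]), mean_citations_per_paper([10, 8, 5, 4, 3])
```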
Further steps towards an ideal method of measuring citation performance: The avoidance of citation (ratio) averages in field-normalization
- Computer Science, J. Informetrics
- 2011
Professional and citizen bibliometrics: complementarities and ambivalences in the development and use of indicators—a state-of-the-art report
- Business, Scientometrics
- 2016
An analytical clarification is proposed by listing an informed set of (sometimes unsolved) problems in bibliometrics which can shed light on the tension between simple but invalid indicators that are widely used (e.g., the h-index) and more sophisticated indicators that are not used or cannot be used in evaluation practices because they are not transparent to users, cannot be calculated, or are difficult to interpret.
Evaluation of some methods for the relative assessment of scientific publications
- Computer Science, Scientometrics
- 2005
A new bibliometric indicator, "relative subfield impact", is introduced which compares the number of citations received by papers of a research unit to the average subfield impact factor.
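In generic form, the "relative subfield impact" described above is a ratio of the unit's mean citation rate to the average subfield impact factor. The function below is an illustrative sketch under that reading, not the authors' definition verbatim:

```python
def relative_subfield_impact(unit_citations, avg_subfield_impact_factor):
    # Mean citations of the unit's papers, relative to the average
    # impact factor of the subfield(s) in which the unit publishes.
    mean_citations = sum(unit_citations) / len(unit_citations)
    return mean_citations / avg_subfield_impact_factor

# A unit averaging 5 citations per paper in a subfield averaging 5
# scores exactly 1.0 (i.e., impact on par with its subfield).
score = relative_subfield_impact([4, 6], 5.0)
```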