
Bornmann and Haunschild: Measuring Individual Performance with Comprehensive Bibliometric Reports as an Alternative to h-Index Values

Abstract

The h-index is frequently used to measure the performance of single scientists in Korea (and beyond). No single indicator alone, however, can provide a stable and complete assessment of performance. This Brief Communication introduces the Stata command bibrep.ado, which automatically produces bibliometric reports for single researchers (senior researchers working in the natural or life sciences). The user of the command receives a comprehensive bibliometric report which can be used in research evaluation instead of the h-index.

INTRODUCTION

Evaluation is a feature of nearly all higher education systems worldwide. An overview of evaluations in the Korean higher education system can be found in Kim [1]. These evaluations are mainly based on peer review, in Korea and beyond. Since the late 1980s, however, bibliometric indicators have been used for research evaluation purposes on a larger scale [2]. These indicators stand for latent dimensions of less readily measurable variables (such as the productivity of researchers and the quality of research); the indicators are presumed to be associated with these variables without directly measuring them [3]. One important reason for using bibliometrics is, according to Adams et al. [4], that “direct assessment of research activity needs expert judgment, which is costly and erroneous, so proxy indicators based on metadata around research inputs and outputs are widely used” (p. 2). However, bibliometric indicators are seen not only as instruments which might replace the peer review process, but also as instruments providing useful information to support peer review in decision making [5,6]. When bibliometrics is used in peer review, this is called informed peer review. In all application scenarios of bibliometrics, reports including explanations of the indicators used and interpretations of the results are necessary.
There are currently numerous options for comprehensively evaluating individual researchers' performance. A recent paper highlights some of the new platforms on which research output can be visualized and evaluated [7]. For example, ResearchGate (see researchgate.net), ORCID (see orcid.org), and Publons (see publons.com) are currently universally applicable platforms for judging and crediting researchers based on their published items.
In recent decades, several indicators have been developed which can be used in research evaluation processes. Overviews of these indicators are given by Mingers and Leydesdorff [8] and Waltman [9]. Although a diverse set of indicators has been developed to measure the performance of single researchers (Wildgaard et al. [10] identified 108 indicators [11]), there is a tendency in research evaluation to use only single indicators, such as the h-index [12] or the Journal Impact Factor [13], which are readily available. However, there is no agreement in the bibliometric community and beyond on a single best indicator [14] that should always be used; in fact, bibliometric research indicates that a single best indicator does not exist. Each indicator provides a different perspective on the research impact of an evaluated unit (e.g., a researcher). For example, the h-index has been criticized for its lack of age- and field-normalization [15] and its use of an arbitrary threshold for identifying the most important publications in a set [16]. Criticism of the use of the Journal Impact Factor for research evaluation purposes is manifold in the bibliometric literature [17].
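To make concrete how much information a single indicator compresses into one number, the following minimal Stata sketch computes the h-index from per-paper citation counts; the variable name citations is an assumption for illustration, not part of any package:

  * Minimal sketch: compute the h-index from one row per paper.
  * Assumption: a variable named citations holds each paper's citation count.
  gsort -citations              // sort papers by citations, descending
  generate rank = _n            // rank 1 = most highly cited paper
  count if citations >= rank    // papers with at least as many citations as their rank
  display "h-index = " r(N)     // this count is exactly the h-index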

CONTEXTUALIZED BIBLIOMETRICS

No single indicator alone, however, can provide a stable and complete assessment of performance, as outlined by Hammarfelt and Rushforth [18]: “The h-index could be seen as an attempt to summarize a whole career in one single measure, and in some reports, the h-index is represented as an almost magical number that can be used to characterize and grade a researcher” (p. 175). Instead, Lewison et al. [19] propose using a range of indicators in a carefully selected array. Furthermore, Waltman and van Eck [5] recommend not only the use of a diverse set of indicators, but also the contextualization of the results of bibliometric investigations: “scientometric indicators should be complemented with contextual information. The scientometric context of an indicator consists of all scientometric information that can be relevant in the interpretation of the indicator. When indicators are made available, their scientometric context should also be made available as much as possible” (p. 544).
For example, important information for the interpretation of a citation-based indicator is the list of publications and each publication's citation impact. The most highly cited publications are especially important in this context, since they determine the citation impact performance of a researcher. The interpretation of a researcher's citation impact should also consider the number of co-authors the researcher has published with: the more co-authors involved in a researcher's publication record, the smaller the researcher's individual contribution is likely to be.
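As a small illustration of such contextualization, the following Stata sketch lists the most highly cited papers together with their co-author counts; the variable names title, citations, and n_coauthors are assumptions for illustration:

  * Sketch: show the scientometric context behind a citation indicator.
  * Assumes one row per paper (at least 10 papers) with variables
  * title, citations, and n_coauthors (hypothetical names).
  gsort -citations
  list title citations n_coauthors in 1/10  // the ten most highly cited papers
  summarize citations n_coauthors, detail   // full distributions, not just a mean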
Similar to the proposal for contextualizing bibliometric results by Waltman and van Eck [5], Gunashekar et al. [20] recommended the following for reviewer panels of major funding schemes: “along with the bibliometrics data, the panel could be provided with a concise (maximum 2 pages) ‘quick reference guide’ focusing on how to interpret the bibliometric analysis (e.g., explanations of the key bibliometric indicators of impact, fundamental points related to the normalisation process, and the comparability of results across the applicants)” (p. 1831).
Taken together, it is a hallmark of good bibliometric analyses (and statistical analyses in general) that the recipient receives more than a single number [21,22,23]. This Brief Communication points to a new feature in the Stata® software [24] which can be used to facilitate contextualized and diverse bibliometrics (similar features are available in other statistical programs, such as R with the packages R2wd or rmarkdown). The new command putdocx lets the user create Word (.docx) files including statistical results (see www.stata.com/new-in-stata/create-word-documents/): formatted paragraphs with embedded Stata output in text, tables, and graphs. The new command can be combined with other Stata commands (e.g., for calculating mean citations or co-authorship networks) to generate a comprehensive bibliometric report including several results based on different indicators. The sequence of Stata commands for generating the contextualized report can be saved, shared, or improved.
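A minimal putdocx sketch, assuming a per-paper variable named citations and an output file report.docx (both names are illustrative), shows the basic workflow of writing formatted text and computed statistics into a Word file:

  * Open a new .docx document in memory.
  putdocx begin
  putdocx paragraph, style(Heading1)
  putdocx text ("Bibliometric report")
  * Embed a computed statistic in a normal paragraph.
  putdocx paragraph
  summarize citations, meanonly
  putdocx text ("Mean citations per paper: ")
  putdocx text (r(mean)), nformat(%9.1f)
  * Write the document to disk.
  putdocx save report.docx, replace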

THE NEW STATA COMMAND BIBREP.ADO

The new Stata command bibrep.ado, which bundles such a sequence of Stata commands (especially putdocx), is available in the SSC Archive. The command can be used to produce a bibliometric report for a single researcher (a senior researcher working in the natural or life sciences). Additionally, a help file explains the use of bibrep.ado and the necessary data. The command needs the following bibliometric data at the paper level as input variables for generating the report: 1) publication year; 2) document type; 3) name of the first author; 4) number of co-authors; 5) journal title; 6) title of the paper; 7) volume; 8) issue; 9) first page; 10) country codes of the authors' affiliations; 11) number of countries with which the authors are affiliated; 12) citation counts; 13) paper percentile [25]; and 14) journal percentile [26]. The indicators and the corresponding results for a researcher are explained in the report itself. An additional document is produced including the publication list of the researcher with citation impact information. Interested readers can use the command for their own research evaluation purposes (see the sketch below). For example, researchers could use the command to produce bibliometric reports in the context of informed grant or fellowship peer review processes, and peer review panels awarding prizes could study the publication records of the candidates.
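A typical session might look as follows. The installation line is the standard mechanism for SSC packages; the data file name papers.dta and the bare invocation of bibrep are assumptions for illustration, so the accompanying help file should be consulted for the documented syntax and required variable names:

  * Install the command once from the SSC Archive.
  ssc install bibrep
  * Load a paper-level dataset containing the 14 input variables listed above
  * (papers.dta is a hypothetical file name).
  use papers.dta, clear
  * Hypothetical invocation; see the help file for the actual syntax.
  bibrep
  * Display the documentation shipped with the package.
  help bibrep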
Using the user-written command bibrep.ado, bibliometricians and other users of bibliometric data can generate results on single researchers about as fast as they can calculate single indicators, such as the h-index. Thus, contextualized bibliometrics based on a diverse set of indicators can be produced in roughly the same time required to calculate single indicator values, provided the data for producing the report are readily available. Possible users of the command should be aware that they need access to bibliometric tools such as SciVal (Elsevier, see scival.com) or InCites (Clarivate Analytics, see clarivate.com/products/incites): the percentile data necessary for a researcher's citation impact analysis are currently not available in Scopus (Elsevier) or Web of Science (Clarivate Analytics). This is a (current) limitation of using bibrep.ado instead of h-index values to assess the performance of single researchers.
Readers with recommendations for improving bibrep.ado are invited to contact the authors of this Brief Communication.

Notes

Disclosure: The authors have no potential conflicts of interest to disclose.

Author Contributions

  • Writing - original draft: Bornmann L, Haunschild R.

References

1. Kim T. The evaluation of the higher education system in the Republic of Korea. In: Cowen R, editor. The World Yearbook of Education 1996: The Evaluation of Higher Education Systems. London, United Kingdom: Kogan Page; 1996. p. 113–125.
2. Roemer RC, Borchardt R. Meaningful Metrics: A 21st Century Librarian's Guide to Bibliometrics, Altmetrics, and Research Impact. Chicago, IL: Association of College and Research Libraries; 2015.
3. Wilsdon J, Allen L, Belfiore E, Campbell P, Curry S, Hill S, et al. The Metric Tide: Report of the Independent Review of the Role of Metrics in Research Assessment and Management. Bristol, United Kingdom: Higher Education Funding Council for England (HEFCE); 2015.
4. Adams J, Loach T, Szomszor M. Interdisciplinary Research: Methodologies for Identification and Assessment. London, United Kingdom: Digital Science; 2016.
5. Waltman L, van Eck NJ. The need for contextualized scientometric analysis: an opinion paper. In: Ràfols I, Molas-Gallart J, Castro-Martínez E, Woolley R, editors. Proceedings of the 21st International Conference on Science and Technology Indicators; València, Spain: Universitat Politècnica de València; 2016. p. 541–549.
6. Bornmann L. Scientific peer review. Annu Rev Inform Sci Tech. 2011; 45(1):197–245.
7. Gasparyan AY, Nurmashev B, Yessirkepov M, Endovitskiy DA, Voronov AA, Kitas GD. Researcher and author profiles: opportunities, advantages, and limitations. J Korean Med Sci. 2017; 32(11):1749–1756.
8. Mingers J, Leydesdorff L. A review of theory and practice in scientometrics. Eur J Oper Res. 2015; 246(1):1–19.
9. Waltman L. A review of the literature on citation impact indicators. J Informetrics. 2016; 10(2):365–391.
10. Wildgaard L, Schneider JW, Larsen B. A review of the characteristics of 108 author-level bibliometric indicators. Scientometrics. 2014; 101(1):125–158.
11. Bornmann L, Marx W. How to evaluate individual researchers working in the natural and life sciences meaningfully? A proposal of methods based on percentiles of citations. Scientometrics. 2014; 98(1):487–509.
12. Hirsch JE. An index to quantify an individual's scientific research output. Proc Natl Acad Sci USA. 2005; 102(46):16569–16572.
13. Garfield E. Citation analysis as a tool in journal evaluation: journals can be ranked by frequency and impact of citations for science policy studies. Science. 1972; 178(4060):471–479.
14. Bartolucci F, Dardanoni V, Peracchi F. Ranking scientific journals via latent class models for polytomous item response data. J R Stat Soc Ser A Stat Soc. 2015; 178(4):1025–1049.
15. Bornmann L, Daniel HD. What do we know about the h index? J Assoc Inf Sci Technol. 2007; 58(9):1381–1385.
16. Waltman L, van Eck NJ. The inconsistency of the h-index. J Assoc Inf Sci Technol. 2012; 63(2):406–415.
17. Bornmann L, Marx W, Gasparyan AY, Kitas GD. Diversity, value and limitations of the journal impact factor and alternative metrics. Rheumatol Int. 2012; 32(7):1861–1867.
18. Hammarfelt B, Rushforth AD. Indicators as judgment devices: an empirical study of citizen bibliometrics in research evaluation. Res Eval. 2017; 26(3):169–180.
19. Lewison G, Thornicroft G, Szmukler G, Tansella M. Fair assessment of the merits of psychiatric research. Br J Psychiatry. 2007; 190:314–318.
20. Gunashekar S, Wooding S, Guthrie S. How do NIHR peer review panels use bibliometric information to support their decisions? Scientometrics. 2017; 112(3):1813–1835.
21. Best J. Damned Lies and Statistics: Untangling Numbers from the Media, Politicians, and Activists. Berkeley, CA: University of California Press; 2012.
22. Hammarfelt B, de Rijcke S, Wouters P. From eminent men to excellent universities: university rankings as calculative devices. Minerva. 2017; 55(4):391–411.
23. Leydesdorff L, Wouters P, Bornmann L. Professional and citizen bibliometrics: complementarities and ambivalences in the development and use of indicators—a state-of-the-art report. Scientometrics. 2016; 109(3):2129–2150.
24. StataCorp. Statistical Software: Release 15. College Station, TX: StataCorp; 2017.
25. Bornmann L, Leydesdorff L, Mutz R. The use of percentiles and percentile rank classes in the analysis of bibliometric data: opportunities and limits. J Informetrics. 2013; 7(1):158–165.
26. Pudovkin AI, Garfield E. Rank-normalized impact factor: a way to compare journal performance across subject categories. In: Bryans JB, editor. ASIST 2004: Proceedings of the 67th ASIS&T Annual Meeting, Vol. 41: Managing and Enhancing Information: Cultures and Conflicts; Medford, NJ: Information Today Inc.; 2004. p. 507–515.
ORCID iDs

Lutz Bornmann
https://orcid.org/0000-0003-0810-7091

Robin Haunschild
https://orcid.org/0000-0001-7025-7256
