Abstract
In the era of digitization and Open Access, article-level metrics are increasingly employed to distinguish influential research works and to adjust research management strategies. Tagging individual articles with digital object identifiers exposes them to numerous channels of scholarly communication and allows related activities to be quantified. The aim of this article is to overview currently available article-level metrics and highlight their advantages and limitations. Article views and downloads, citations, and social media metrics are increasingly employed by publishers to move away from the dominance and inappropriate use of journal metrics. Quantitative article metrics are complementary to one another and often require qualitative expert evaluation. Expert evaluation may help to guard against manipulation, such as indiscriminate social media activity that artificially boosts altmetrics. Values of article metrics should be interpreted in view of confounders such as differing patterns of citation and social media activity across countries and academic disciplines.
Currently available journal and author metrics do not reveal the scientific merit and influence of individual articles. Some of these metrics, particularly the notorious journal impact factor (JIF), have major drawbacks that limit their use for research evaluation and academic promotion.1,2 In the era of digitization and mass proliferation of online resources, article-level metrics are increasingly proposed and employed to evaluate the dissemination of scientific information and research impact.3,4 While traditional metrics such as citations are slow to reveal the scholarly use of articles and their overall implications, several alternative metrics have emerged to reveal ‘attractive’ articles in a timely manner and to aid in adjusting research and journal publishing strategies.5,6 All metrics depend on journal digitization and promotion, which expand the readership network. Use of an individual article can be increased by professional promotion through widely visible online platforms, information aggregators, digital libraries, and bibliographic databases.7
Academic disciplines and countries with low citation metrics may particularly benefit from complementing article citations with alternative metrics and adjusting their research and development strategies accordingly. As an example, quantifying article citations along with online attention scores and reuse in the social sciences, humanities, and nursing yields a more objective evaluation of research implications in these traditionally low-cited academic areas.8,9
Current online technologies offer equal opportunities for improving the visibility and promotion of individual articles. The assignment of digital object identifiers (DOI) is one such universally applicable technology, offering benefits to all stakeholders of science communication.10 DOI assignment positively influences the metrics generated by Altmetric.com and Plum Analytics, the two information giants that aggregate attention surrounding digitally accessible articles.11,12 Notably, an analysis of 496,665 PubMed-indexed articles demonstrated that a high percentage of articles (40.5%) published in the last 50 years lack DOIs.13 The same study pointed to a much better situation with DOI assignment in the U.S., the U.K., and the Netherlands than in Russia, the Czech Republic, and Romania.
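Because a DOI is machine-resolvable, article metadata can be retrieved programmatically from registration agencies. Below is a minimal sketch, assuming the freely accessible Crossref REST API (the endpoint is real, but field availability varies by record, and the DOI is used only for illustration):

```python
# A minimal sketch of resolving a DOI to bibliographic metadata via the
# public Crossref REST API. The specific fields accessed below vary by
# record, and the DOI is illustrative only.
import requests

def doi_metadata(doi):
    resp = requests.get(f"https://api.crossref.org/works/{doi}")
    resp.raise_for_status()
    work = resp.json()["message"]          # Crossref wraps records in "message"
    return {
        "title": (work.get("title") or ["(untitled)"])[0],
        "journal": (work.get("container-title") or ["(unknown)"])[0],
        "crossref_citations": work.get("is-referenced-by-count"),
    }

print(doi_metadata("10.1371/journal.pbio.1002541"))  # illustrative DOI
```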
The expanded use of online access and evaluation tools brings about a major change in the selection, categorization, and reuse of scholarly articles.14 Global bibliographic databases and online platforms such as Scopus and PubMed are now equipped with digital tools to capture citations and overall attention to individual articles. Researchers and authors may distinguish ‘trending’ articles and ‘hot’ topics in their fields by ranking articles with linked citations and overall online attention records.15
The move toward article-level metrics is well accepted by experts who recognize the academic value of citation alternatives, particularly bookmarking counts at the Mendeley reference management platform.16 Mendeley metrics are employed for article impact evaluation similarly to citations, keeping in mind that bookmarking counts accumulate earlier and that not all of them result in eventual citations.17
Considering the wide variety of established and emerging metrics currently under discussion, we aimed to comprehensively overview article-level metrics and their potential implications for upgrading research evaluation and editorial strategies.
Article access data such as HTML views and PDF and XML downloads are basic quantitative metrics that reflect readers' interest in certain topics. Most online journal platforms are now equipped with tools for quantifying real-time usage metrics.18 These metrics can be monitored over time to visualize temporal and geographic trends of article usage.19,20 Variations in the use of HTML, PDF, and XML article formats can also be reported to guide publishers on readership preferences and priorities for reuse, archiving, and marketing.19
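The aggregation behind such usage dashboards can be reduced to counting access events by period and by format. The following minimal sketch works over a hypothetical access log; real platforms typically rely on COUNTER-style usage reports:

```python
# A minimal sketch of the aggregation behind usage dashboards: counting
# access events by month and by article format. The log records are
# hypothetical; real platforms typically draw on COUNTER-style reports.
from collections import Counter
from datetime import date

access_log = [  # (access date, format) pairs from a hypothetical server log
    (date(2021, 1, 5), "HTML"), (date(2021, 1, 6), "PDF"),
    (date(2021, 2, 1), "HTML"), (date(2021, 2, 9), "XML"),
    (date(2021, 2, 9), "HTML"),
]

monthly_usage = Counter((d.strftime("%Y-%m"), fmt) for d, fmt in access_log)
for (month, fmt), count in sorted(monthly_usage.items()):
    print(month, fmt, count)   # e.g. "2021-02 HTML 2"
```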
Views and downloads are important for non-indexed start-up journals that may change their online interface in line with their readership's preferences and online activities. For indexed journals, these basic metrics aid in revealing influential articles within a short period (weeks or months) while citations are still accruing.21 A comparative analysis of access to the same set of journal articles through Web of Science and the Springer online interface revealed a user preference for the quick and free access facilitated by the publisher website and common search engines.22
Interestingly, article characteristics such as title length, number of authors, and keywords do not confound download counts; an advanced publisher interface has more influence on article usage metrics.23 Article views are increasingly dependent on publisher access and distribution preferences. Publishers that tag their articles with DOIs may set various online navigation routes through journal platforms and indexing databases, for example directing visits to HTML links before PDFs can be downloaded. The availability of various access routes, particularly for articles archived by PubMed Central and other digital repositories, skews usage statistics, making journal website visits a less reliable metric for journals with expanded archiving.24
Publisher and journal integration with social media sites significantly increases journal webpage visits and article downloads, particularly within the first days of new content publication.25 Twitter and Facebook are now instrumental for directing public interest toward freely accessible articles and increasing their usage.26
Usage metrics displayed at journal websites and other distribution platforms correlate variably with citation metrics.27 Top-downloaded articles are usually those from high-impact journals.28 Variable strength of such associations is also reported across academic disciplines, with a strong correlation between downloads and citations in the social sciences and humanities.29 A moderate correlation between usage metrics in the first year and citations in the two years post-publication was reported for the multidisciplinary open-access mega-journal Scientific Reports.30
Raw citation counts have long been used for quantifying and comparing the impact of scholarly articles. In the absence of other reliable tools, citations have been viewed as the only indicators of article utility and “quality.”31 Relevant citations indicate that index items are read and critically appraised by citing authors.32 Currently, the online platforms of numerous journals display article citations recorded by Scopus, Web of Science, Google Scholar, and Dimensions. The counts may differ due to the variable volumes of sources covered by each database and search engine, and particularly due to the high selectivity of Web of Science and Scopus.33

The ease of tracking citations at Web of Science and Scopus has allowed distinguishing most-cited articles, visualizing trending topics, and recording citation classics across subject categories. Related analyses often focus on 100 highly-cited items published within several decades. Such analyses allow visualizing citation networks and setting research priorities. High citation counts often reflect the approval of evidence-based research by professionals.34,35 Regardless of their numbers, certain citations add to the scientific prestige of cited articles. These are primarily citations from clinical trial reports, practice guidelines, policy documents, and systematic reviews.21 As a rule, the reference lists of such influential articles are thoroughly checked for relevance, validity, and scientific prestige.

Importantly, citing and referencing patterns have changed over the past two decades.32 The digitization, indexing, and language of individual articles are now the driving forces behind citation counts.36 The availability of online multidisciplinary databases with advanced search engines and comprehensive literature coverage facilitates the retrieval and subsequent citation of articles. In fact, an analysis of the referencing and citing characteristics of nearly 1 million articles demonstrated that expanded and updated reference lists with a high share of publications recorded in Web of Science positively correlate with citation impact.37 The interdisciplinary nature of current research and authors' and readers' expanded professional networks are believed to confound modern-day citation impact.38

Although citations are more reliable metrics than views and downloads, they are still far from optimal indicators of article use and influence. Numerous subjective factors and variable editorial strategies affect citation counts. Access to bibliographic databases and search engines differs across countries, resulting in limited citation activity in disadvantaged regions. Also, numerous journal instructions tell authors how to limit or expand their reference lists.
Reviewers of highly prestigious journals request that authors replace or omit references to non-Anglophone articles, even when these are relevant and better explore the context.39 All these subjective factors may undermine the reliability of citation metrics.40 Raw citation counts are highly dependent on academic fields, necessitating field normalization, either by correcting for the number of related (average) references (citation score normalized by cited references) or for citations (mean normalized citation score).41,42 The widely discussed Leiden Manifesto stressed the importance of field normalization and endorsed the percentile method for quantifying an article's impact in its field.43 The percentile method estimates an article's rank on the basis of its listing among the 1%, 10%, or 20% of highly-cited items in a certain well-defined and ‘homogeneous’ academic field. Experiments with various normalization approaches have distinguished percentile citation scores as less field- and time-biased than scores normalized for mean (average) citations in a field.44 The same experiments pointed to time normalization as an approach for revealing influential articles in academic fields that accumulate numerous citations within a short post-publication period.
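To illustrate the percentile principle, the minimal sketch below ranks an article's citation count against a hypothetical distribution of same-field articles; the field distribution and the top-10% threshold are assumptions for illustration only:

```python
# A minimal sketch of the percentile method: rank an article's citation
# count against all articles in the same well-defined field. The field
# distribution and thresholds below are hypothetical illustrations.
def citation_percentile(citations, field_counts):
    """Percentage of same-field articles cited less than the index article."""
    below = sum(1 for c in field_counts if c < citations)
    return 100 * below / len(field_counts)

field = [0, 1, 2, 2, 3, 5, 8, 13, 40, 120]   # hypothetical field distribution
p = citation_percentile(40, field)
print(p, p >= 90)   # 80.0 False -> not among the top 10% in this field
```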
The relative citation ratio (RCR) is a now widely discussed article-level citation metric proposed by experts at the U.S. National Institutes of Health (NIH) Office of Portfolio Analysis.45 This is a non-proprietary metric that is field- and time-adjusted and benchmarked to NIH-funded articles. Its algorithm is based on a network of PubMed citations tracked by the NIH iCite analytic web tool (https://icite.od.nih.gov/). The numerator of the RCR is the total annual citations of an article, while the denominator is the average annual citations received by NIH-funded articles in the same field. An RCR equal to one represents the field norm, and values above one point to better performance of an index article compared with 50% of NIH-funded articles.46 The main advantage of the RCR is that it ignores journal impact indicators. Two large biomedical funding agencies, the Wellcome Trust in the U.K. and Fondazione Telethon in Italy, have already experimented with the RCR for analyzing their research grant outcomes.47 Several bibliometric analyses have found correlations between the RCR and established field-normalized citation indicators such as the mean normalized citation score, citation percentile, and F1000 score.48,49 Nonetheless, the RCR has an important limitation in that it ignores citations outside the PubMed platform, which, particularly in highly cited fields, may decrease RCR values.50,51
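Because iCite exposes its citation data through a public web API, RCR values can also be retrieved programmatically. The following sketch assumes the /api/pubs endpoint and the relative_citation_ratio response field reported in iCite's documentation; both should be verified against the current API:

```python
# A minimal sketch of retrieving RCR values from the NIH iCite web API.
# The endpoint path and the relative_citation_ratio field follow iCite's
# public documentation, but should be verified before relying on them.
import requests

def fetch_rcr(pmids):
    """Return {pmid: RCR} for the given PubMed IDs (None if not yet computed)."""
    resp = requests.get("https://icite.od.nih.gov/api/pubs",
                        params={"pmids": ",".join(map(str, pmids))})
    resp.raise_for_status()
    return {pub["pmid"]: pub.get("relative_citation_ratio")
            for pub in resp.json().get("data", [])}

print(fetch_rcr([27599104]))  # PMID of the original RCR methods paper (ref. 45)
```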
F1000Prime (https://f1000.com/prime/home) is a subscription-based expert evaluation system for articles in biology and medicine. It is a post-publication tool of Faculty of 1000 (a U.K.-based publisher) that was launched in 2002 to help biomedical researchers find the most significant and potentially impactful articles in their fields.52 Articles in the system are rated by faculty members from all over the world as exceptional (3 stars), very good (2 stars), or good (1 star). Total scores are the sums of the stars received from all recommending experts. The evaluated articles are additionally tagged with remarks such as “good for teaching,” “interesting hypothesis,” and “new finding.” The experts may add brief comments and highlight potential implications.53 The three areas with the most recommendations are cell biology, molecular medicine, and genetics.53 Although the expert evaluation system is a sustainable alternative to citation-based ranking and social-media commenting, it has limitations related to its subscription model, closed circle of peer evaluators, and focus on “high-quality” and well-cited biomedical items.54,55
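As a minimal sketch of the scoring scheme described above, the total score can be computed as a plain sum of stars; the rating labels follow the system's three-tier scale:

```python
# A minimal sketch of the F1000Prime scoring scheme described above: the
# total score is simply the sum of stars across all recommending experts.
STARS = {"exceptional": 3, "very good": 2, "good": 1}

def f1000_total(recommendations):
    return sum(STARS[rating] for rating in recommendations)

# Three experts rating an article as exceptional, very good, and good:
print(f1000_total(["exceptional", "very good", "good"]))  # 6
```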
Alternative article metrics, or altmetrics, offered by Altmetric.com (https://www.altmetric.com/) and Plum Analytics (https://plumanalytics.com/) are increasingly used to assess the broad academic and societal impact of scholarly works.56 The metrics generated for the same articles may differ between platforms due to varying tracking patterns. Articles in some disciplines, particularly in the social sciences, humanities, and biomedicine, with broad public engagement in online sharing and commenting, may garner high altmetrics values.57 The main advantage of altmetrics is their early, real-time reflection of individual article use and societal implications.
Altmetric.com services are currently utilized by most large publishers for monitoring their trending articles with high values of the Altmetric Attention Score (AAS). Although the algorithm for the AAS calculation has not been publicized, its values are freely available and displayed at the center of the Donut Badges of published articles. The Donut Badge is a multicolor circle that depicts the sources of attention to scholarly items (e.g., light blue represents Twitter activity).58 The Altmetric.com algorithm weighs the outreach and attractiveness of publicly shared and discussed articles, processing data from a wide variety of news outlets, blogs, policy documents, and social media. Although citation counts are not processed for the AAS calculation, they are still drawn from the Dimensions platform (https://app.dimensions.ai) and displayed along with the alternative metrics. Comparative analyses of alternative metric provider services demonstrated the accuracy of Altmetric.com's coverage of article mentions in blog posts, news items, and tweets.59
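For those who wish to query the AAS directly, Altmetric.com offers a public details endpoint keyed by DOI. The following is a minimal sketch, assuming the v1 API path and the "score" response field; the DOI is illustrative:

```python
# A minimal sketch of fetching the Altmetric Attention Score for a DOI from
# Altmetric.com's public details endpoint. The "score" field carries the AAS;
# other response fields are not shown and may change between API versions.
import requests

def attention_score(doi):
    resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}")
    if resp.status_code == 404:        # article not tracked by Altmetric.com
        return None
    resp.raise_for_status()
    return resp.json().get("score")

print(attention_score("10.1371/journal.pbio.1002541"))  # illustrative DOI
```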
Plum Analytics offers the PlumX tool that generates the Plum Print, an infographic with aggregated information on article usage, captures, mentions, social-media attention, and citations.60 Plum Analytics was acquired by Elsevier in 2017 to display article metrics on Scopus. PlumX accurately reflects metrics from EBSCO Information Services and the Mendeley reference management platform. The latter was acquired by Elsevier in 2013 and rapidly became one of the most popular resources in medicine.27,61
Mendeley reader counts are displayed by both Altmetric.com and PlumX. The reader counts drawn from the Mendeley platform (https://www.mendeley.com/research-papers/) accurately reflect article bookmarks by Mendeley users.62 An analysis of 20,000 randomly chosen articles from various scientific fields demonstrated that 63% of these items were bookmarked and saved into individual online libraries by Mendeley users, and that the related reader counts reflected their impact better than other alternative metrics.63 Mendeley users save article links in their individual libraries and manage reference lists for their own articles, which may include some of the saved items. Manipulation by artificially boosting Mendeley reader counts is unlikely.
Mendeley reader counts predict future citations with a variable degree of certainty across academic disciplines and countries.64 Reader counts increase over time due to rapidly growing Mendeley user numbers, necessitating normalization when assessing the alternative impact of individual articles.65 The mean normalized reader score was proposed to normalize Mendeley reader counts against the year of publication and subject category.66
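A minimal sketch of such a normalization is shown below, assuming the score divides an article's reader count by the mean count of articles from the same publication year and subject category; the grouping and sample data are illustrative:

```python
# A minimal sketch of a mean normalized reader score, assuming the score
# divides an article's Mendeley reader count by the mean count of articles
# sharing its publication year and subject category (the grouping used here
# is an assumption for illustration).
from collections import defaultdict
from statistics import mean

def mean_normalized_reader_scores(records):
    """records: dicts with 'id', 'year', 'subject', and 'readers' keys."""
    groups = defaultdict(list)
    for r in records:
        groups[(r["year"], r["subject"])].append(r["readers"])
    group_means = {k: mean(v) for k, v in groups.items()}
    return {r["id"]: r["readers"] / group_means[(r["year"], r["subject"])]
            for r in records}

papers = [
    {"id": "a", "year": 2016, "subject": "medicine", "readers": 120},
    {"id": "b", "year": 2016, "subject": "medicine", "readers": 40},
]
print(mean_normalized_reader_scores(papers))  # {'a': 1.5, 'b': 0.5}
```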
Twitter is another popular alternative channel for the dissemination and promotion of scholarly articles.62,67 The ease of generating tweets, attaching graphics, tagging relevant users, and publicly sharing information attracts millions of users worldwide.68 Twitter activity can be viewed as microblogging, a convenient and concise format for commenting on scholarly articles. Tweets with professional comments on article contents weigh more than those without.3 Also, tweets and retweets by active users with numerous followers weigh more than the activities of users with small and disorganized networks. Accordingly, article tweets from group (journal) accounts receive more attention and expert comments than those from individual accounts.69,70 The specificity of professional topics and professionals' heavy workloads may limit the expansion of scholarly activities and networking on Twitter in some academic disciplines.71 At the same time, tweets in large numbers can be indiscriminately generated by so-called bots (automated accounts, spambots) and trolls (human users) that may spread misinformation, diminish the value of public engagement, and skew altmetrics.72,73,74
Correlation analyses of tweets and citations have demonstrated variable results. A strong and statistically significant association was found for articles by prolific authors in medicine.75 However, an analysis of 1.4 million articles covered by PubMed and Web of Science did not reveal any such association, pointing to variable promotion practices in indexed journals and the distinct features of traditional and alternative impact metrics.76
Similar to the time and field normalization of citations and Mendeley reader counts, normalization of Twitter counts has been proposed for cross-field and cross-country comparisons. The normalization method is based on the percentile principle, which is applicable to journals with at least 80% of articles mentioned on Twitter.77 Twitter percentiles ranging from zero to 100 are particularly applicable to biomedical, health, life, and earth science journals. For papers published in 2012, three top journals with 100% of articles tweeted were identified: The New England Journal of Medicine (16,908 total tweets for 215 papers), The Lancet (10,750 tweets for 233 papers), and The BMJ (12,469 tweets for 325 papers).77
Scholarly activities on social media are associated with the prestige of individual articles. Blog and news posts contribute particularly to institutional and country prestige.78 Social-media references are similar to traditional citations in that both metrics correlate positively with the JIF and international collaboration.78 Blogs and news outlets may also create a coverage bias. A recent analysis of more than 100,000 randomly chosen publications processed by altmetrics providers pointed to a coverage bias due to the uneven representation of countries, languages, and academic disciplines, with over 65% of blogs representing English-speaking countries and over 75% publishing in English.79
The so-called societal impact of articles is becoming an integral part of research evaluation in most developed countries. Publishers integrating their journal platforms with popular social-media sites pave the way for attracting online attention and boosting related scores.80 Skilled authors and advocates of alternative distribution routes are now also able to share and promote their research works while waiting for citations.
While social-media activities are gaining momentum, some journal editors suggest entrusting article and journal promotion responsibilities to social media editors who can streamline post-publication communication.81 These new editor (ambassador) roles involve exposing potentially influential articles to broad public attention.82,83,84
Article-level metrics are increasingly recognized as reflections of the real-life influence of scholarly research output. As the digitization and aggregation of online information becomes ubiquitous, any activity surrounding published articles can add value to these metrics. Article metrics aggregate information on individual item usage, expert evaluation, public attention, and citations. Arguably, the concept of article metrics is an attempt to prevent misuse of the JIF and to provide research evaluators with a diversity of reliable and widely applicable tools. Comprehensively covering all currently available article metrics and publishing the aggregate information along with articles may aid in distinguishing influential items and enriching research evaluation, which is moving away from exclusive reliance on citation counts.
Current altmetrics providers, particularly Altmetric.com and Plum Analytics, have emerged as organizations supplying all stakeholders of scholarly communication with reliable metrics and links to post-publication communication threads. While social-media activities diversify, altmetrics platforms grow and integrate with journal and database platforms. Numerous open-access journals are currently partnering with altmetrics providers to display dynamically changing article influence in real time.
Information surrounding published articles cannot be exhaustively covered by altmetrics providers. Emerging online communication and networking platforms with advanced technical specifications are gradually being included in the aggregation platforms to generate more comprehensive altmetrics. Similar to traditional citations, altmetrics can be manipulated by artificially increasing views, downloads, and social-media mentions. Limitations of article metrics can also be due to differing social-media practices across academic disciplines and countries. Even within a discipline, some article types may be more attractive for social media than others, confounding the metrics calculation and necessitating contextual evaluation. Hence, any quantitative evaluation of article influence should be complemented by qualitative expert evaluation.
References
1. Tregoning J. How will you judge me if not by impact factor? Nature. 2018; 558(7710):345. PMID: 29921857.
2. Hatch A, Curry S. Changing how we evaluate research is difficult, but not impossible. Elife. 2020; 9:e58654. PMID: 32782065.
3. Bornmann L, Haunschild R. Alternative article-level metrics: the use of alternative metrics in research evaluation. EMBO Rep. 2018; 19(12):e47260. PMID: 30425125.
4. Gasparyan AY, Yessirkepov M, Voronov AA, Koroleva AM, Kitas GD. Comprehensive approach to open access publishing: platforms and tools. J Korean Med Sci. 2019; 34(27):e184. PMID: 31293109.
5. Citrome L. Moving forward with article level metrics: introducing altmetrics. Int J Clin Pract. 2015; 69(8):811. PMID: 26223556.
6. Chandrashekhar Y, Shaw L. Journal editors and altmetrics: moth to the flame? JACC Cardiovasc Imaging. 2019; 12(9):1899–1902. PMID: 31488257.
7. Chavda J, Patel A. Measuring research impact: bibliometrics, social media, altmetrics, and the BJGP. Br J Gen Pract. 2016; 66(642):e59–e61. PMID: 26719483.
8. Hammarfelt B. Using altmetrics for assessing research impact in the humanities. Scientometrics. 2014; 101(2):1419–1430.
9. Dardas LA, Woodward A, Scott J, Xu H, Sawair FA. Measuring the social impact of nursing research: an insight into altmetrics. J Adv Nurs. 2019; 75(7):1394–1405. PMID: 30507052.
10. Gorraiz J, Melero-Fuentes D, Gumpenberger C, Valderrama-Zurián JC. Availability of digital object identifiers (DOIs) in Web of Science and Scopus. J Informetrics. 2016; 10(1):98–109.
11. Erfanmanesh M. Highly-alted articles in library and information science. Webology. 2017; 14(2):66–77.
12. Salahshoori F, Abedini Z. Investigating the social media presence of articles in altmetrics field indexed in Scopus database: an altmetrics study. Libr Philos Pract. 2019; 2779.
13. Boudry C, Chartron G. Availability of digital object identifiers in publications archived by PubMed. Scientometrics. 2017; 110(3):1453–1469.
14. Butler JS, Kaye ID, Sebastian AS, Wagner SC, Morrissey PB, Schroeder GD, et al. The evolution of current research impact metrics: from bibliometrics to altmetrics? Clin Spine Surg. 2017; 30(5):226–228. PMID: 28338492.
15. Knowlton SE, Paganoni S, Niehaus W, Verduzco-Gutierrez M, Sharma R, Iaccarino MA, et al. Measuring the impact of research using conventional and alternative metrics. Am J Phys Med Rehabil. 2019; 98(4):331–338. PMID: 30300231.
16. Haustein S, Peters I, Bar-Ilan J, Priem J, Shema H, Terliesner J. Coverage and adoption of altmetrics sources in the bibliometric community. Scientometrics. 2014; 101(2):1145–1163.
17. Eldakar MAM. Who reads international Egyptian academic articles? An altmetrics analysis of Mendeley readership categories. Scientometrics. 2019; 121(1):105–135.
18. Kohn K. Effects of publisher interface and Google Scholar on HTML and PDF clicks: investigating paths that inflate usage. J Acad Librariansh. 2018; 44(6):816–823.
19. Yan KK, Gerstein M. The spread of scientific information: insights from the web usage statistics in PLoS article-level metrics. PLoS One. 2011; 6(5):e19917. PMID: 21603617.
20. Fang Z, Guo X, Yang Y, Yang Z, Li Q, Hu Z, et al. Measuring global research activities using geographic data of scholarly article visits. Electron Libr. 2017; 35(4):822–838.
21. Moliterno DJ. The top papers of 2017: by subsequent citations and online views and downloads. JACC Cardiovasc Interv. 2018; 11(3):325–327. PMID: 29413251.
22. Chen B. Usage pattern comparison of the same scholarly articles between Web of Science (WoS) and Springer. Scientometrics. 2018; 115(1):519–537.
23. Duan Y, Xiong Z. Download patterns of journal papers and their influencing factors. Scientometrics. 2017; 112(3):1761–1775.
24. Davis PM. Public accessibility of biomedical articles from PubMed Central reduces journal readership--retrospective cohort analysis. FASEB J. 2013; 27(7):2536–2541. PMID: 23554455.
25. Wang X, Cui Y, Li Q, Guo X. Social media attention increases article visits: an investigation on article-level referral data of PeerJ. Front Res Metr Anal. 2017; 2:11.
26. Wang X, Fang Z, Guo X. Tracking the digital footprints to scholarly articles from social media. Scientometrics. 2016; 109(2):1365–1376.
27. Amath A, Ambacher K, Leddy JJ, Wood TJ, Ramnanan CJ. Comparing alternative and traditional dissemination metrics in medical education. Med Educ. 2017; 51(9):935–941. PMID: 28719136.
28. Singson M, Thiyagarajan S, Leeladharan M. Relationship between electronic journal downloads and citations in library consortia. Libr Rev. 2016; 65(6/7):429–444.
29. Vaughan L, Tang J, Yang R. Investigating disciplinary differences in the relationships between citations and downloads. Scientometrics. 2017; 111(3):1533–1545.
30. McGillivray B, Astell M. The relationship between usage and citations in an open access mega-journal. Scientometrics. 2019; 121(2):817–838.
31. Fenton JE, O'Connor A, Ullah I, Ahmed I, Shaikh M. Do citation classics in rhinology reflect utility rather than quality? Rhinology. 2005; 43(3):221–224. PMID: 16218517.
32. Gasparyan AY, Yessirkepov M, Voronov AA, Gerasimov AN, Kostyukova EI, Kitas GD. Preserving the integrity of citations and references by all stakeholders of science communication. J Korean Med Sci. 2015; 30(11):1545–1552. PMID: 26538996.
33. Gasparyan AY, Ayvazyan L, Kitas GD. Multidisciplinary bibliographic databases. J Korean Med Sci. 2013; 28(9):1270–1275. PMID: 24015029.
34. Powell AJ, Conlee EM, Chang DG. Three decades of citation classics: the most cited articles in the field of physical medicine and rehabilitation. PM R. 2014; 6(9):828–840. PMID: 25091931.
35. Bohl MA, Turner JD, Little AS, Nakaji P, Ponce FA. Assessing the relevancy of “citation classics” in neurosurgery. Part II: foundational papers in neurosurgery. World Neurosurg. 2017; 104:939–966. PMID: 28438655.
36. Tahamtan I, Bornmann L. What do citation counts measure? An updated review of studies on citations in scientific documents published between 2006 and 2018. Scientometrics. 2019; 121(3):1635–1684.
37. Ahlgren P, Colliander C, Sjögårde P. Exploring the relation between referencing practices and citation impact: a large-scale study based on Web of Science data. J Assoc Inf Sci Technol. 2018; 69(5):728–743.
38. Gates AJ, Ke Q, Varol O, Barabási AL. Nature's reach: narrow work has broad impact. Nature. 2019; 575(7781):32–34. PMID: 31695218.
39. Lazarev VS, Nazarovets SA. Don't dismiss citations to journals not published in English. Nature. 2018; 556(7700):174.
40. Neff M. Quest for publication metrics undermines regional research. Nature. 2018; 554(7691):169.
41. Bornmann L, Haunschild R. Citation score normalized by cited references (CSNCR): the introduction of a new citation impact indicator. J Informetrics. 2016; 10(3):875–887.
42. Bornmann L, Wohlrabe K. Normalisation of citation impact in economics. Scientometrics. 2019; 120(2):841–884.
43. Hicks D, Wouters P, Waltman L, de Rijcke S, Rafols I. Bibliometrics: the Leiden Manifesto for research metrics. Nature. 2015; 520(7548):429–431. PMID: 25903611.
44. Dunaiski M, Geldenhuys J, Visser W. On the interplay between normalisation, bias, and performance of paper impact metrics. J Informetrics. 2019; 13(1):270–290.
45. Hutchins BI, Yuan X, Anderson JM, Santangelo GM. Relative citation ratio (RCR): a new metric that uses citation rates to measure influence at the article level. PLoS Biol. 2016; 14(9):e1002541. PMID: 27599104.
46. Murphy LS, Kraus CK, Lotfipour S, Gottlieb M, Langabeer JR 2nd, Langdorf MI. Measuring scholarly productivity: a primer for junior faculty. Part III: understanding publication metrics. West J Emerg Med. 2018; 19(6):1003–1011. PMID: 30429933.
47. Naik G. The quiet rise of the NIH's hot new metric. Nature. 2016; 539(7628):150. PMID: 27830815.
48. Bornmann L, Haunschild R. Relative citation ratio (RCR): an empirical attempt to study a new field-normalized bibliometric indicator. J Assoc Inf Sci Technol. 2017; 68(4):1064–1067.
49. Purkayastha A, Palmaro E, Falk-Krzesinski HJ, Baas J. Comparison of two article-level, field-independent citation metrics: field-weighted citation impact (FWCI) and relative citation ratio (RCR). J Informetrics. 2019; 13(2):635–642.
50. Waltman L. NIH's new citation metric: a step forward in quantifying scientific impact? Updated 2015. Accessed February 5, 2021. https://www.cwts.nl/blog?article=n-q2u294.
51. Alshareef AM, Alhamid MF, El Saddik A. Toward citation recommender systems considering the article impact in the extended nearby citation network. Peer Peer Netw Appl. 2018; 12(5):1336–1345.
52. Vardell E, Swogger SE. F1000Prime: a faculty of 1000 tool. Med Ref Serv Q. 2014; 33(1):75–84. PMID: 24528266.
53. Akers KG. Electronic resources reviews: F1000prime: expert recommendations of journal articles in biology and medicine. Issues Sci Technol Librariansh. 2018; 90.
54. Du J, Tang X, Wu Y. The effects of research level and article type on the differences between citation metrics and F1000 recommendations. J Assoc Inf Sci Technol. 2016; 67(12):3008–3021.
55. Bornmann L, Haunschild R. Do altmetrics correlate with the quality of papers? A large-scale empirical study based on F1000Prime data. PLoS One. 2018; 13(5):e0197133. PMID: 29791468.
56. Ortega JL. Altmetrics data providers: a meta-analysis review of the coverage of metrics and publication. Prof Inf. 2020; 29(1):e290107.
57. Haustein S, Costas R, Larivière V. Characterizing social media metrics of scholarly papers: the effect of document properties and collaboration patterns. PLoS One. 2015; 10(3):e0120495. PMID: 25780916.
58. Trueger NS, Thoma B, Hsu CH, Sullivan D, Peters L, Lin M. The altmetric score: a new measure for article-level dissemination and impact. Ann Emerg Med. 2015; 66(5):549–553. PMID: 26004769.
59. Ortega JL. Reliability and accuracy of altmetric providers: a comparison among Altmetric.com, PlumX and Crossref Event Data. Scientometrics. 2018; 116(3):2123–2138.
60. Lindsay JM. PlumX from plum analytics: not just altmetrics. J Electron Resour Med Libr. 2016; 13(1):8–17.
61. Azer SA, Azer S. Top-cited articles in medical professionalism: a bibliometric analysis versus altmetric scores. BMJ Open. 2019; 9(7):e029433.
62. Meschede C, Siebenlist T. Cross-metric compatability and inconsistencies of altmetrics. Scientometrics. 2018; 115(1):283–297.
63. Zahedi Z, Costas R, Wouters P. How well developed are altmetrics? A cross-disciplinary analysis of the presence of ‘alternative metrics’ in scientific publications. Scientometrics. 2014; 101(2):1491–1513.
64. Pooladian A, Borrego Á. A longitudinal study of the bookmarking of library and information science literature in Mendeley. J Informetrics. 2016; 10(4):1135–1142.
65. Zahedi Z, Costas R, Wouters P. Mendeley readership as a filtering tool to identify highly cited publications. J Assoc Inf Sci Technol. 2017; 68(10):2511–2521.
66. Haunschild R, Bornmann L. Normalization of Mendeley reader counts for impact assessment. J Informetrics. 2016; 10(1):62–73.
67. Maggio LA, Meyer HS, Artino AR Jr. Beyond citation rates: a real-time impact analysis of health professions education research using altmetrics. Acad Med. 2017; 92(10):1449–1455. PMID: 28817430.
68. Zimba O, Radchenko O, Strilchuk L. Social media for research, education and practice in rheumatology. Rheumatol Int. 2020; 40(2):183–190. PMID: 31863133.
70. Ortega JL. The presence of academic journals on Twitter and its relationship with dissemination (tweets) and research impact (citations). Aslib J Inf Manag. 2017; 69(6):674–687.
71. Khan MS, Shahadat A, Khan SU, Ahmed S, Doukky R, Michos ED, et al. The Kardashian index of cardiologists: celebrities or experts? JACC Case Rep. 2020; 2(2):330–332. PMID: 32292918.
72. Jamison AM, Broniatowski DA, Quinn SC. Malicious actors on twitter: a guide for public health researchers. Am J Public Health. 2019; 109(5):688–692. PMID: 30896994.
73. Pagoto S, Waring ME, Xu R. A call for a public health agenda for social media research. J Med Internet Res. 2019; 21(12):e16661. PMID: 31855185.
74. Ahmed S, Gupta L. Social media for medical journals. Cent Asian J Med Hypotheses Ethics. 2020; 1(1):26–32.
75. Ravikumar SS, Khonglam B. Tweets of an article and its citation: an altmetric study of most prolific authors. Libr Philos Pract. 2018; 1745.
76. Haustein S, Peters I, Sugimoto CR, Thelwall M, Larivière V. Tweeting biomedicine: an analysis of tweets and citations in the biomedical literature. J Assoc Inf Sci Technol. 2014; 65(4):656–669.
77. Bornmann L, Haunschild R. How to normalize Twitter counts? A first attempt based on journals in the Twitter index. Scientometrics. 2016; 107(3):1405–1422. PMID: 27239079.
78. Didegah F, Bowman TD, Holmberg K. On the differences between citations and altmetrics: an investigation of factors driving altmetrics versus citations for finnish articles. J Assoc Inf Sci Technol. 2018; 69(6):832–843.
79. Ortega JL. Blogs and news sources coverage in altmetrics data providers: a comparative analysis by country, language, and subject. Scientometrics. 2020; 122(1):555–572.
80. Asyyed Z, McGuire C, Samargandi O, Al-Youha S, Williams JG. The use of Twitter by plastic surgery journals. Plast Reconstr Surg. 2019; 143(5):1092e–1098e.
81. Pineda C, Pérez-Neri I, Sandoval H. Challenges for social media editors in rheumatology journals: an outlook. Clin Rheumatol. 2019; 38(6):1785–1789. PMID: 31093788.
82. Dharnidharka VR. A social media editor for pediatric transplantation. Pediatr Transplant. 2019; 23(1):e13343. PMID: 30635957.
83. Kochanek PM, Kudchadkar SR, Kissoon N. New developments for pediatric critical care medicine in 2019 and beyond. Pediatr Crit Care Med. 2019; 20(4):311.
84. Lopez M, Chan TM, Thoma B, Arora VM, Trueger NS. The social media editor at medical journals: responsibilities, goals, barriers, and facilitators. Acad Med. 2019; 94(5):701–707. PMID: 30334841.