
Kocak: Publication Ethics in the Era of Artificial Intelligence

Abstract

The application of new technologies such as artificial intelligence (AI) to science affects how research is conducted and the methodology behind it. While the responsible use of AI brings many innovations and benefits to science and humanity, its unethical use poses a serious threat to scientific integrity and the literature. Even in the absence of malicious use, the output of chatbots, as software applications based on AI, carries the risk of containing biases, distortions, irrelevancies, misrepresentations, and plagiarism. The use of complex AI algorithms therefore raises concerns about bias, transparency, and accountability, and requires the development of new ethical rules to protect scientific integrity. Unfortunately, the development and writing of ethical codes cannot keep up with the pace at which technology is developed and implemented. The main purpose of this narrative review is to inform readers, authors, reviewers, and editors about new approaches to publication ethics in the era of AI. It focuses specifically on how to disclose the use of AI in a manuscript, how to avoid publishing entirely AI-generated text, and current standards for retraction.


INTRODUCTION

The emergence of ethical concerns about the use of artificial intelligence (AI) dates back to the early days of its development. Most sources trace the modern development of AI to the early 1950s and the work of Alan Mathison Turing. He proposed the “Turing test” as a criterion for whether a machine can exhibit behavior indistinguishable from that of a human. His paper “Computing Machinery and Intelligence” sparked debates about machine intelligence that would eventually lead to ethical considerations.1 The term “artificial intelligence” was coined by John McCarthy at a conference in 1956,2,3 and McCarthy went on to do foundational research in the field.2 With the advancement of computer technology, AI became more widely applied in the 1970s and 1980s, raising concerns about privacy and biases in decision-making. In 1976, Joseph Weizenbaum’s book “Computer Power and Human Reason” addressed the moral responsibility of AI developers.4 In the 1990s, Richard Wallace created ALICE (Artificial Linguistic Internet Computer Entity), a pioneering chatbot that conversed with humans in natural language.5 Since the 1990s, ethical concerns about the use of AI have become more prominent, and the need for ethical regulations and guidelines has been discussed by stakeholders.
What is the modern definition of AI? As Chen et al.6 point out, “The goal of AI is to build systems that can learn and adapt as they make well-informed decisions, that is, systems that have certain levels of autonomy (i.e., the capability of task and motion planning) as well as intelligence (i.e., the capability of decision-making and reasoning).” The current definition in Britannica is as follows: AI is “the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.”7 Many different AI applications are currently used in healthcare, such as robotics, image processing, big data analysis, machine learning, voice recognition, and predictive modeling.8
Among the most widely used AI tools in scientific publishing are chatbots, which can generate text, code, video/images, and even full research articles. From information retrieval to data analysis and plagiarism detection, AI tools facilitate the work not only of researchers but also of editors and publishers in many areas (Table 1).9,10,11 However, we should be aware not only of the strengths but also of the weaknesses and limitations of AI tools.12,13 They lack consciousness. They lack creative or original ideas, because they can only create content based on the corpora on which they were trained. The information compiled by a chatbot may be inaccurate or out of date and may contain subjective biases. Chatbots may even list references that do not exist. More worryingly, they can facilitate the production of fraudulent manuscripts, such as those produced by paper mills.14
Table 1

Areas in which AI-based algorithms are used in scholarly publishing

• Literature search and information retrieval
• Data analysis
• Summarizing the content
• Bibliography and citation management
• Creation of abstracts, images, videos, and manuscripts
• Image quality control
• Content formatting
• Language translation and grammar check
• Target journal selection
• Peer review and statistical quality assessment
• Similarity check to prevent plagiarism
• Detection of data and image fabrication
• Detection of paper mills
AI = artificial intelligence.


While the responsible use of AI brings many innovations and benefits to science and humanity, its unethical use, such as the fabrication of articles, poses a serious threat to scientific integrity and the literature.15,16,17 Even without malicious use, chatbot output itself risks containing biases, distortions, irrelevancies, misrepresentations, and plagiarism.18,19 Therefore, the use of complex AI algorithms raises concerns about bias, transparency, and accountability, which calls for the development of new ethical guidelines to protect scientific integrity.
We are now witnessing AI technologies reshaping the field of academic publishing. As researchers, authors, reviewers, and editors, we are in a period in which we all have to renew and improve our knowledge of this subject. We need to recognize the rapidly changing dynamics in scholarly publishing and address several concerns and challenges to ensure that AI tools and chatbots are used ethically and responsibly in academia. These issues include the debate on how to disclose the use of AI in a manuscript, how to prevent the publication of fabricated manuscripts, and the changing standards for retraction.

AIM

The main purpose of this narrative review article is to inform readers, authors, reviewers, and editors about new approaches to publication ethics in the age of AI. It focuses specifically on how to disclose the use of AI in your writing, how to avoid publishing text generated entirely by AI, and current standards for retraction.

SEARCH STRATEGY

I prepared a list of keyword combinations such as ‘Artificial Intelligence,’ ‘Ethics in Publishing,’ ‘Scientific Fraud,’ ‘Scientific Integrity,’ ‘Scientific Misconduct,’ ‘Research Misconduct,’ and ‘Retraction of Publication.’ I took the presence of Medical Subject Headings (MeSH) terms into account when choosing my search queries, as illustrated below. I searched MEDLINE/PubMed, Scopus, Web of Science, and the Directory of Open Access Journals. I also checked the “Similar articles” section in PubMed for additional citations closely related to the articles retrieved with the initial MeSH terms. A Google Scholar search was conducted for terms that have no equivalent in MeSH (e.g., chatbot, paper mills, fabricated manuscripts). I did not set any time limits or intervals when creating my search strategy. All article types were included in the searches. Publications that were not relevant to my purpose, were not in English, or were not available in full text were excluded. I focused specifically on articles related to standards for the disclosure of AI, avoiding the publication of fabricated manuscripts, and new and changing standards for retracting articles.
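For illustration only (this is a hypothetical query, not the exact string used in this review), a PubMed search combining a MeSH concept with a free-text term could look like the following:

("Artificial Intelligence"[MeSH Terms] OR chatbot[Title/Abstract]) AND ("Scientific Misconduct"[MeSH Terms] OR "Retraction of Publication"[Publication Type])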

AI AND DISCLOSURE

Journals vary in their policies on the use of generative AI for scientific writing. Some publishers prohibit the use of AI without explicit editorial authorization,20 while others require detailed annotation in the manuscript.21,22 Hosseini et al. argue that banning these tools could encourage the undisclosed use of chatbots, which would undermine transparency and integrity in research.23 They also emphasize that a ban would undermine the principle of equality and diversity in science for non-native speakers of English.
WAME revised its recommendations on “Chatbots and Generative AI in Relation to Scientific Publication” in May 2023.18 These recommendations can be considered general principles. The first version emphasized the transparency, honesty, and responsibility of authors. The second version added the recommendation that editors and peer reviewers should inform authors and be transparent when they use AI in the manuscript evaluation process (Table 2).
Table 2

WAME (World Association of Medical Editors) recommendations on “Chatbots and Generative AI in Relation to Scientific Publication”a (version 2: 2023)

• Chatbots cannot be authors
• Authors should be transparent when chatbots are used and provide information about how they were used
• Authors are responsible for material provided by a chatbot in their paper (including the accuracy of what is presented and the absence of plagiarism) and for appropriate attribution of all sources (including original sources for material generated by the chatbot)
• Editors need appropriate tools to help them detect content generated or altered by AI. Such tools should be made available to editors regardless of ability to pay for them, for the good of science and the public, and to help ensure the integrity of healthcare information and reducing the risk of adverse health outcomes
• Editors and peer reviewers should specify, to authors and each other, any use of chatbots in the evaluation of the manuscript and the generation of reviews and correspondence. If they use chatbots in their communications with authors and each other, they should explain how they were used
AI = artificial intelligence.
aThe item written in bold italics was added in version 2.


Where in the article, and how, should the use of AI be disclosed? There is somewhat more disagreement about where to disclose than about how to disclose. Some journals require authors to detail the use of AI in the acknowledgments section,24,25,26 while others prefer it to be described in the body of the text.27,28 The rationale behind the view that the acknowledgments section is not the appropriate place is that chatbots cannot be accepted as authors, since they cannot take responsibility or accountability for the research.23 The APA recommends disclosure in the methods section of research articles and in the introduction of other types of articles.28 If AI was used for data collection, analysis, or figure generation, the ICMJE and COPE recommend describing its use in the methods section.
How do we disclose the use of AI? First of all, authors should read the journal’s AI policies before submission. Journals and publishers expect you to be transparent and honest. You are asked to detail what you did and how you did it, and to indicate where the AI-generated content appears in your manuscript (Table 3). You must keep all your prompts and responses, and most journals require you to declare them (a sample disclosure statement is given after Table 3).
Table 3

Author’s guide before writing and submitting an AI-assisted manuscript

Journals generally ask you to declare/indicate
• Which AI model was used, when, and by whom
• The rationale for the use of AI and how it is used
• All prompts and responses
• Where in your article the AI-generated content appears
During the manuscript writing process
• Check the accuracy of all references
• Ensure that all concepts are properly attributed
• Ensure that the language used is neutral and inclusive
• Check the similarity of text for plagiarism
• Read the journal and/or publisher’s AI policy carefully
AI = artificial intelligence.

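As a hypothetical illustration (not the template of any particular journal), a disclosure statement might read: “During the preparation of this manuscript, the author used ChatGPT (GPT-4, OpenAI) to improve the readability and language of the text. All output was reviewed and edited by the author, who takes full responsibility for the content of the publication.”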

Another important issue is how to cite AI. The journal’s instructions should, of course, be read first, but it is necessary to include information such as the model used, the version, and the date of use. For example, the following format suggested by Hosseini et al. seems informative enough23:
“OpenAI (2023). ChatGPT (GPT-4, 12 May Release) [Large language model]. Response to query X.Y. Month/Day/Year. https://chat.openai.com/chat”

AI AND AVOIDING FABRICATED MANUSCRIPTS

We all know that non-native English speakers face disadvantages in the primarily Anglophone publishing business and often have to send their work to expensive and time-consuming translation and editing agencies. AI tools like ChatGPT can play an important role in helping academics who are not native English speakers write and edit their papers. Such tools also encourage researchers who shy away from peer review because of language difficulties to take on the role of reviewer. Therefore, AI tools, especially if freely available, can promote and improve scientific equity.29 The important thing here is that authors inform the journal and the publisher transparently.
On the other hand, AI tools can easily be used in unethical ways, resulting in misconduct and even fraud. AI tools can now produce full papers, which threatens the integrity of science. Paper mills represent one of the end points of scientific fraud. This business emerged in response to publish-or-perish policies. Paper mills rely heavily on AI-generated texts that often contain fake or low-quality data.30 Authors pay to have their names appear on these papers, and paper mills may even try to bribe journal editors to get manuscripts accepted quickly. The need to distinguish human writing from AI-generated text is therefore critical. AI tools can play an important role in detecting, or raising suspicion of, this kind of scientific misconduct and fraud.
Currently, AI tools such as Papermill Alarm, GPTZero, GPT-2 Output Detector, Proofig, FigCheck, and ImaCheck are used by some publishers to detect fabricated papers and image manipulation.31,32,33 However, journals and publishers should be aware of their limitations: these tools are not infallible. They can produce false-positive or false-negative results, and many errors in images flagged by AI tools turn out to be false positives.32 Gao et al.34 evaluated abstracts generated by ChatGPT for 50 scientific medical articles. They collected titles and original abstracts from recent issues of five high-impact journals and compared ChatGPT-generated abstracts with the originals. Blinded human reviewers correctly identified only 68% of the ChatGPT-generated abstracts and 86% of the human-written abstracts. The effectiveness of AI text-content detectors, such as OpenAI, Writer, Copyleaks, GPTZero, and CrossPlag, was the subject of another study.35 Its findings show a wide range in the tools’ capacity to correctly classify text as AI-generated or human-written. Overall, the tools performed better at classifying text generated by GPT-3.5 than content generated by GPT-4 or written by humans. In a preliminary study by Habibzadeh, a total of 50 text fragments were used to determine the performance of GPTZero in distinguishing machine-generated texts from human-written texts.36 It recorded an accuracy of 80%, with positive and negative likelihood ratios of 6.5 and 0.4, respectively (a worked illustration of these measures follows below). He concluded that GPTZero had a low false-positive rate (classifying a human-written text as machine-generated) and a high false-negative rate (classifying a machine-generated text as human-written).
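To make these performance figures concrete, the short Python sketch below computes accuracy and the positive and negative likelihood ratios from a detector’s confusion matrix. The counts are placeholders chosen only to mirror the reported pattern (high specificity, lower sensitivity); they are not the raw data from the cited study.

    # Hypothetical confusion matrix for an AI-text detector
    # (placeholder counts, not the data from the study cited above)
    tp = 16  # machine-generated texts correctly flagged as AI
    fn = 9   # machine-generated texts missed (false negatives)
    tn = 22  # human-written texts correctly passed as human
    fp = 3   # human-written texts wrongly flagged (false positives)

    sensitivity = tp / (tp + fn)                # true-positive rate
    specificity = tn / (tn + fp)                # true-negative rate
    accuracy = (tp + tn) / (tp + fn + tn + fp)

    lr_pos = sensitivity / (1 - specificity)    # LR+: how strongly a "flagged" verdict supports AI origin
    lr_neg = (1 - sensitivity) / specificity    # LR-: how strongly a "clean" verdict supports human origin

    print(f"accuracy={accuracy:.2f}, LR+={lr_pos:.1f}, LR-={lr_neg:.2f}")

With these placeholder counts the sketch prints accuracy=0.76, LR+=5.3, and LR-=0.41, the same qualitative pattern as the reported figures: a positive verdict is strong evidence of machine generation, while a negative verdict is much weaker evidence of human authorship.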
Thus, AI tools for discriminating AI-generated from human-written text are currently not good enough, and their accuracy and reliability need to be enhanced.35,37 They can also be bypassed easily by using an online rewording tool or by rewording the text oneself.38 Therefore, suspected misconduct identified by AI tools should be carefully scrutinized by humans to confirm its accuracy, as suggested by COPE.39 As Gasparyan et al. point out, the advice of experienced editors on how the credibility of academic publications is threatened by blind reliance on online processing should not be ignored.40 Sometimes the best way to verify that a manuscript is legitimate is to ask the author to provide the raw data of the study.37
What makes the problem more complicated is that establishing strict rules on ethics in scientific publishing is not an easy task. In the age of AI, where do you draw the line between ethical and unethical behavior? More specifically, what should be the maximum percentage of AI-generated content in an article? Is there a standard percentage set by journals for AI content? Unfortunately, editorial guidelines from global associations such as the ICMJE, COPE, and WAME outline a general framework for publication ethics but do not provide authors and editors with specific advice on AI and ethics.40

AI AND CURRENT STANDARDS FOR RETRACTION

In 2013, Steen et al.41 conducted an important study showing that the reasons for retraction had expanded in recent years to include plagiarism and duplicate publication. One of the study’s conclusions was that lower barriers to retraction were apparent in an increase in retractions for “new” offenses such as plagiarism. A Spanish group recently published two important studies. In the study by Candal-Pedreira et al., retracted articles originating from paper mills were evaluated; the authors reported that the ratio of paper mill retractions to all-cause retractions reached 21.8% (772/3,544) in 2021.42 In their second study, retracted papers from European institutions between 2000 and 2021 were analyzed. Retraction rates increased from 10.7 to 44.8 per 100,000 publications between 2000 and 2020, and research misconduct was the reason in two thirds of retractions. They also showed that the leading causes of retraction changed over time, from copyright and authorship issues in 2000 (2.5 per 100,000 publications) to duplication in 2020 (8.6 per 100,000 publications).43
In a recently published systematic review of studies of retraction notices, misconduct accounted for 60% of all retractions, confirming the results of the studies mentioned above.44 According to “Retraction Watch,” hundreds of IEEE publications produced in previous years contained plagiarized material, citation fraud, and distorted wording.45 Vuong et al.46 reported that manipulated peer review was the most common reason for retraction in the 2010s. By analyzing 18,603 retractions compiled from the Retraction Watch database up to 2019, they found that manipulated peer review was responsible for 676 retractions in the period 2012–2019. Striking findings were presented in Van Noorden’s report, published in Nature.47 A new annual record was set in 2023, with more than 10,000 retractions of research articles. The main reason was “paper mills” engaged in the systematic manipulation of the peer review and publication processes. Even more worrying, integrity experts claimed this was just the “tip of the iceberg.” In 2015, in response to growing concerns about these activities worldwide, WAME published an action plan to prevent “fake” reviewers from conducting reviews by searching for and verifying the ORCID iDs of potential reviewers.40 Ultimately, however, the most decisive factor will be the approach and uncompromising attitude of journal editors toward publication ethics and scientific integrity.
In non-Anglophone countries, plagiarism in particular stands out as a reason for retraction. Koçyiğit et al.48 drew attention to the increase in the number of retracted articles from Turkey in recent years and reported that the most common reasons for retraction were plagiarism, duplication, and error. Gupta et al.49 conducted a survey to analyze perceptions of plagiarism among researchers and journal editors, particularly those from non-Anglophone countries. The survey confirmed that, despite increased global awareness of plagiarism, non-Anglophone medical researchers do not understand the issue sufficiently. While most agree that copying text and images is plagiarism, other behaviors, such as stealing ideas and paraphrasing previously published work, are considered outside the scope of plagiarism. The authors conclude that closing the knowledge gap through up-to-date training and the widespread use of advanced anti-plagiarism software can address this unmet need.
These studies show the changing concepts and practices of retraction over the last decade. What was the driver behind this change? The use of advanced technology in publishing helps us detect plagiarism and duplication. On the other hand, the misuse of technology raises ethical issues such as paper mills, image manipulation, confidentiality breaches, and the non-disclosure of competing interests. Such unethical acts not only compromise the integrity of publishing and science but may also require the retraction of the article.
The first version of the COPE retraction guideline was published in 2009. A revised version was published in 2019 and sets the current standards (Table 4).50 As can be seen in Table 4, image manipulation, lack of authorization for material or data use, certain legal issues, compromised peer review, and non-disclosure of conflicts of interest were added as reasons for retraction. However, as Teixeira da Silva notes, the COPE, ICMJE, and CSE ethics guidelines are still incomplete in that they do not specifically address the fake articles, authors, emails, and affiliations associated with stings and hoaxes.51
Table 4

COPE (Committee on Publication Ethics) retraction guideline (version 2: 2019)a

Editors should consider retracting a publication if:
• They have clear evidence that the findings are unreliable, either as a result of major error (eg, miscalculation or experimental error), or as a result of fabrication (eg, of data) or falsification (eg, image manipulation)
• It constitutes plagiarism
• It reports unethical research
• The findings have previously been published elsewhere without proper attribution to previous sources or disclosure to the editor, permission to republish, or justification (ie, cases of redundant publication)
• It contains material or data without authorisation for use
• Copyright has been infringed or there is some other serious legal issue (eg, libel, privacy)
• It has been published solely on the basis of a compromised or manipulated peer review process
• The author(s) failed to disclose a major competing interest (a.k.a. conflict of interest) that, in the view of the editor, would have unduly affected interpretations of the work or recommendations by editors and peer reviewers
aThe items written in bold italics were added in version 2.


Another popular topic in recent years is self-retraction. As many experts have emphasized, self-retraction due to honest errors deserves more credit than it currently receives. Fanelli argues that such publications should be viewed as legitimate publications that scholars will treat as evidence of integrity.52

FUTURE DIRECTIONS AND LIMITATIONS OF THE STUDY

Currently, international organizations such as COPE, the ICMJE, and the CSE share the same views on authorship, AI disclosure, transparency and responsibility, and the ethical use of AI in their recommendations on AI use in scholarly publishing (Table 5). While acknowledging AI’s role in decision-making, COPE emphasizes the necessity of responsibility, transparency, and human oversight when incorporating AI tools into the peer review process. The ICMJE addresses the use of AI in peer review but advocates restricting its use by editors. The CSE recommendations are similar to those of the ICMJE and COPE but do not refer to reviewers and editors. As shown in Table 2, WAME details how to disclose the use of AI in an article and recommends that editors should have access to AI-detection tools.
Table 5

Recommendations from COPE, ICMJE, and CSE on the use of AI tools in scholarly publishing

AI and authorship
• COPE: AI tools cannot be listed as authors
• ICMJE: AI technologies cannot be listed as authors
• CSE: AI tools should not be listed as authors
AI use and transparency/responsibility
• COPE: Authors are fully responsible for the content of their manuscript, even those parts produced by an AI tool, and are thus liable for any breach of publication ethics. Transparency of processes must ensure technical robustness and rigorous data governance.
• ICMJE: Humans are responsible for any submitted material that included the use of AI-assisted technologies. Authors should carefully review and edit the result, because AI can generate authoritative-sounding output that can be incorrect, incomplete, or biased.
• CSE: Authors must be accountable for all aspects of a manuscript, including the accuracy of the content created with the assistance of AI, the absence of plagiarism, and appropriate attribution of all such sources.
AI and disclosure
• COPE: Authors who use AI tools in the writing of a manuscript, the production of images or graphical elements of the paper, or the collection and analysis of data must be transparent in disclosing, in the Materials and Methods (or similar section) of the paper, which AI tool was used and how it was used.
• ICMJE: Authors who use such technology should describe, in both the cover letter and the appropriate section of the submitted work, how they used it. For example, if AI was used for writing assistance, this should be described in the acknowledgments section. If AI was used for data collection, analysis, or figure generation, authors should describe this use in the methods.
• CSE: Authors should disclose the usage of AI and machine-learning tools such as ChatGPT, chatbots, and large language models (LLMs). The CSE recommends that journals ask authors to attest, at initial submission and revision, to the usage of AI and to describe its use in either a submission question or the cover letter. Journals may want to ask for the technical specifications (name, version, model) of the LLM or AI and the method of application (query structure, syntax).
AI and editors/peer reviewers
• COPE: AI chatbots pose challenges for journal editors, including issues with plagiarism detection. COPE suggests the application of human judgment and suitable software to overcome these challenges.
• ICMJE: Reviewers must maintain the confidentiality of the manuscript, which may prohibit uploading it to software or other AI technologies where confidentiality cannot be assured. Reviewers must request permission from the journal before using AI technology to facilitate their review, and should be aware that AI can generate authoritative-sounding output that can be incorrect, incomplete, or biased. Editors should be aware that using AI technology in the processing of manuscripts may violate confidentiality.
• CSE: No specific recommendations regarding reviewers and editors
AI = artificial intelligence, COPE = Committee on Publication Ethics, ICMJE = International Committee of Medical Journal Editors, CSE = Council of Science Editors.


AI technology will undoubtedly develop further and play a bigger role in day-to-day life. These developments will reshape scientific publishing and its ethics. More scientists, reviewers, and editors will endeavor to be more transparent about their work and more aware of the ethical issues surrounding the use of AI. Education on ethics and AI will become more important, and researchers will have to consider ethical issues in their projects. Before long, equity and unrestricted access to these technologies may emerge as the most significant issues in AI ethics, particularly for non-native English speakers. Common regulations and shared ethical standards across all countries will therefore be crucial. Governments and funding organizations will have to develop policies to further support ethical research on AI. Perhaps in the near future a new concept, empathic AI, as proposed by Kasani et al.,53 could help protect research and publication ethics by overcoming the limitations of human empathy.
This narrative review has several limitations. Excluding non-English articles could introduce bias. Another drawback is that some articles could not be accessed in full text and were therefore excluded. It is also possible that publications in journals not listed in the indexes used for the literature search were overlooked. These factors could affect the comprehensiveness and objectivity of the review.

CONCLUSION

The application of new technologies to science affects how research is conducted and the methodology behind it. Unfortunately, the development and writing of ethical codes cannot keep up with the pace at which technology is developed and applied. Moreover, preparing guidelines is not an easy task, because codes of ethics are not completely black and white. The fight against scientific misconduct is therefore multi-faceted, continuous, and dependent on teamwork.54 Table 3 is intended as a checklist for authors before writing and submitting an AI-assisted manuscript. I hope this review will guide authors, reviewers, and editors on the responsible use of AI and help raise awareness of this issue. Journals and publishers should have clear and transparent policies on the ethical use of AI in the drafting, editing, and reviewing of manuscripts. They should also avoid unfairly blaming authors when taking action against the unethical use of AI. Educating staff and editorial boards on this issue is not only a need but also an obligation.

Notes

Disclosure: The author has no potential conflicts of interest to disclose.


References

1. Turing AM. Computing machinery and intelligence. Mind. 1950; 59(236):433–460.
2. Stone P, Brooks R, Brynjolfsson E, Calo R, Etzioni O, Hager G, et al. Artificial intelligence and life in 2030. One hundred year study on artificial intelligence: report of the 2015–2016 study panel, Stanford University, Stanford, CA. Updated 2016. Accessed June 11, 2024. https://ai100.stanford.edu/sites/g/files/sbiybj18871/files/media/file/ai100report10032016fnl_singles.pdf .
3. Singh J, Sillerud B, Singh A. Artificial intelligence, chatbots and ChatGPT in healthcare—narrative review of historical evolution, current application, and change management approach to increase adoption. J Med Artif Intell. 2023; 6:30.
4. Weizenbaum J. Computer Power and Human Reason: From Judgment to Calculation. 1st ed. San Francisco, CA, USA: W.H. Freeman and Company;1976.
5. Digital Scholar. What is ChatGPT: the history of ChatGPT – Open AI. Updated 2023. Accessed June 12, 2024. https://digitalscholar.in/history-of-chatgpt/#the-history-of-chatgpt-itspredecessorshines .
6. Chen J, Sun J, Wang G. From unmanned systems to autonomous intelligent systems. Engineering (Beijing). 2022; 12:16–19.
7. Copeland BJ. Fact-checked by the Editors of Encyclopedia Britannica. Artificial intelligence. Britannica. Updated 2024. Accessed May 17, 2024. https://www.britannica.com/question/What-is-artificial-intelligence .
8. Park CW, Seo SW, Kang N, Ko B, Choi BW, Park CM, et al. Artificial intelligence in health care: current applications and issues. J Korean Med Sci. 2020; 35(42):e379. PMID: 33140591.
9. Kaebnick GE, Magnus DC, Kao A, Hosseini M, Resnik D, Dubljević V, et al. Editors’ statement on the responsible use of generative AI technologies in scholarly journal publishing. Am J Bioeth. 2024; 24(3):5–8. PMID: 38085888.
10. Chetwynd E. Ethical use of artificial intelligence for scientific writing: current trends. J Hum Lact. 2024; 40(2):211–215. PMID: 38482810.
11. Doskaliuk B, Zimba O. Beyond the keyboard: academic writing in the era of ChatGPT. J Korean Med Sci. 2023; 38(26):e207. PMID: 37401498.
12. Koçak Z, Altay S. Balkan Medical Journal policy on the use of Chatbots in scientific publications. Balkan Med J. 2023; 40(3):149–150. PMID: 37067468.
13. Ali MJ, Djalilian A. Readership awareness series - Paper 4: Chatbots and ChatGPT - ethical considerations in scientific publications. Ocul Surf. 2023; 28:153–154. PMID: 37028488.
14. Ali MJ, Djalilian A. Readership awareness series - Paper 3: paper mills. Ocul Surf. 2023; 28:56–57. PMID: 36739967.
15. Hammad M. The impact of artificial intelligence (AI) programs on writing scientific research. Ann Biomed Eng. 2023; 51(3):459–460. PMID: 36637603.
16. BaHammam AS. Balancing innovation and integrity: the role of AI in research and scientific writing. Nat Sci Sleep. 2023; 15(15):1153–1156. PMID: 38170140.
17. Korkmaz S. Artificial intelligence in healthcare: a revolutionary ally or an ethical dilemma? Balkan Med J. 2024; 41(2):87–88. PMID: 38269851.
18. Zielinski C, Winker MA, Aggarwal R, Ferris LE, Heinemann M, Lapeña JF, et al. Chatbots, generative AI, and scholarly manuscripts. WAME recommendations on chatbots and generative artificial intelligence in relation to scholarly publications. Updated 2023. Accessed May 18, 2024. https://wame.org/page3.php?id=106 .
19. Limongi R. The use of artificial intelligence in scientific research with integrity and ethics. Future Stud Res J. 2024; 16(1):e845.
20. Thorp HH. ChatGPT is fun, but not an author. Science. 2023; 379(6630):313. PMID: 36701446.
21. Flanagin A, Kendall-Taylor J, Bibbins-Domingo K. Guidance for authors, peer reviewers, and editors on use of AI, language models, and chatbots. JAMA. 2023; 330(8):702–703. PMID: 37498593.
22. Tools such as ChatGPT threaten transparent science; here are our ground rules for their use. Nature. 2023; 613(7945):612. PMID: 36694020.
23. Hosseini M, Resnik DB, Holmes K. The ethics of disclosing the use of artificial intelligence tools in writing scholarly manuscripts. Res Ethics Rev. 2023; 19(4):449–465.
24. Jenkins R, Lin P. AI-assisted authorship: how to assign credit in synthetic scholarship (SSRN Scholarly Paper No. 4342909). Updated 2023. Accessed May 14, 2024. DOI: 10.2139/ssrn.4342909.
25. Hughes-Castleberry K. From cats to chatbots: how non-humans are authoring scientific papers. Discover Magazine. Updated 2023. Accessed May 15, 2024. https://www.discovermagazine.com/the-sciences/from-cats-to-chatbots-how-non-humans-are-authoring-scientific-papers .
26. Stokel-Walker C. ChatGPT listed as author on research papers: many scientists disapprove. Nature. 2023; 613(7945):620–621. PMID: 36653617.
27. Katz DS, Chue Hong NP, Clark T, Muench A, Stall S, Bouquin D, et al. Recognizing the value of software: a software citation guide. F1000 Res. 2020; 9:1257.
28. McAdoo T. How to cite ChatGPT. APA Style Blog. Updated 2024. Accessed May 15, 2024. https://apastyle.apa.org/blog/how-to-cite-chatgpt .
29. Berdejo-Espinola V, Amano T. AI tools can improve equity in science. Science. 2023; 379(6636):991.
30. Parkinson A, Wykes T. The anxiety of the lone editor: fraud, paper mills and the protection of the scientific record. J Ment Health. 2023; 32(5):865–868. PMID: 37697484.
31. Hosseini M, Resnik DB. Guidance needed for using artificial intelligence to screen journal submissions for misconduct. Res Ethics Rev. 2024; 17470161241254052.
32. Jones N. How journals are fighting back against a wave of questionable images. Nature. 2024; 626(8000):697–698. PMID: 38347210.
33. Homolak J. Exploring the adoption of ChatGPT in academic publishing: insights and lessons for scientific writing. Croat Med J. 2023; 64(3):205–207. PMID: 37391919.
34. Gao CA, Howard FM, Markov NS, Dyer EC, Ramesh S, Luo Y, et al. Comparing scientific abstracts generated by ChatGPT to real abstracts with detectors and blinded human reviewers. NPJ Digit Med. 2023; 6(1):75. PMID: 37100871.
35. Elkhatat AM, Elsaid K, Almeer S. Evaluating the efficacy of AI content detection tools in differentiating between human and AI-generated text. Int J Educ Integrity. 2023; 19(1):17.
36. Habibzadeh F. GPTZero performance in identifying artificial intelligence-generated medical texts: a preliminary study. J Korean Med Sci. 2023; 38(38):e319. PMID: 37750374.
37. Liverpool L. AI intensifies fight against ‘paper mills’ that churn out fake research. Nature. 2023; 618(7964):222–223. PMID: 37258739.
38. Elali FR, Rachid LN. AI-generated research paper fabrication and plagiarism in the scientific community. Patterns (N Y). 2023; 4(3):100706. PMID: 36960451.
39. Eaton SE, Soulière M. Artificial intelligence (AI) and fake papers. Updated 2023. Accessed June 11, 2024. https://publicationethics.org/resources/forum-discussions/artificial-intelligence-fake-paper .
40. Gasparyan AY, Yessirkepov M, Voronov AA, Koroleva AM, Kitas GD. Updated editorial guidance for quality and reliability of research output. J Korean Med Sci. 2018; 33(35):e247. PMID: 30140192.
41. Steen RG, Casadevall A, Fang FC. Why has the number of scientific retractions increased? PLoS One. 2013; 8(7):e68397. PMID: 23861902.
42. Candal-Pedreira C, Ross JS, Ruano-Ravina A, Egilman DS, Fernández E, Pérez-Ríos M. Retracted papers originating from paper mills: cross sectional study. BMJ. 2022; 379:e071517. PMID: 36442874.
43. Freijedo-Farinas F, Ruano-Ravina A, Perez-Rios M, Ross JS, Candal-Pedreira C. Biomedical retractions due to misconduct in Europe: characterizations and trends in the last 20 years. Scientometrics. 2024; 129(5):2867–2882.
44. Hwang SY, Yon DK, Lee SW, Kim MS, Kim JY, Smith L, et al. Causes for retraction in the biomedical literature: a systematic review of studies of retraction notices. J Korean Med Sci. 2023; 38(41):e333. PMID: 37873630.
45. Retraction Watch. Plague of anomalies in conference proceedings hint at ‘systemic issues.’. Updated 2023. Accessed May 16, 2024. https://retractionwatch.com/2023/06/15/plague-of-anomalies-in-conference-proceedings-hint-at-systemic-issues .
46. Vuong QH, La LV, Hồ MT, Vuong TT, Ho MT. Characteristics of retracted articles based on retraction data from online sources through February 2019. Sci Ed. 2020; 7(1):34–44.
47. Van Noorden R. More than 10,000 research papers were retracted in 2023 - a new record. Nature. 2023; 624(7992):479–481. PMID: 38087103.
48. Kocyigit BF, Akyol A. Analysis of retracted publications in the biomedical literature from Turkey. J Korean Med Sci. 2022; 37(18):e142. PMID: 35535370.
49. Gupta L, Tariq J, Yessirkepov M, Zimba O, Misra DP, Agarwal V, et al. Plagiarism in non-anglophone countries: a cross-sectional survey of researchers and journal editors. J Korean Med Sci. 2021; 36(39):e247. PMID: 34636502.
50. COPE Council. COPE retraction guidelines — English. Updated 2019. Accessed May 20, 2024. DOI: 10.24318/cope.2019.1.4.
51. Teixeira da Silva JA. Assessing the ethics of stings, including from the prism of guidelines by ethics-promoting organizations (COPE, ICMJE, CSE). Publ Res Q. 2021; 37(1):90–98.
52. Fanelli D. Set up a ‘self-retraction’ system for honest errors. Nature. 2016; 531(7595):415. PMID: 27008933.
53. Kasani PH, Cho KH, Jang JW, Yun CH. Influence of artificial intelligence and chatbots on research integrity and publication ethics. Sci Ed. 2024; 11(1):12–25.
54. Zhaksylyk A, Zimba O, Yessirkepov M, Kocyigit BF. Research integrity: where we are and where we are heading. J Korean Med Sci. 2023; 38(47):e405. PMID: 38050915.