
Kleebayoon and Wiwanitkit: ChatGPT and scientific paper
Dear Editor,
We found the article1 “Could ChatGPT help you to write your next scientific paper?: concerns on research ethics related to usage of artificial intelligence tools” of interest. Park1 advises authors and readers to use the large language model (LLM) chatbot carefully and to ensure that the tool produces results consistent with their intentions. Additionally, scientists should refrain from including any artificial intelligence (AI) tool in the author list. Regarding the future application of LLM tools in scientific writing, the Journal of the Korean Association of Oral and Maxillofacial Surgeons aims to keep an open mind and may adjust its stance1.
Basically, sensitive material should not be created, modified, or approved by AI without human review2. Beyond these ethical considerations, several other challenges have been raised regarding the use of AI technologies such as ChatGPT in scientific writing. A major concern is the possibility that an AI tool will produce subpar or even malicious content. For instance, ChatGPT can create study abstracts that scientists find difficult to distinguish from abstracts written by people, but the same capability can also be used to produce spam or misleading information. Without reliable screening, such problems could compromise the integrity of published work. ChatGPT also requires large amounts of data to respond to and learn from, and these datasets may contain presumptions or assumptions that ultimately turn out to be incorrect. As a result, the ChatGPT user might be given inaccurate or misleading information. The use of AI chatbots in academic research must also consider any potential ethical concerns. A thorough investigation should cover all relevant issues, including authorship attribution, intellectual property rights, and any biases in the data or algorithms.
A number of high-impact journal publishers have begun disclosing their guidelines for the use of AI tools in submitted articles in an effort to address these concerns. Some have made it clear that they will not accept manuscripts that list ChatGPT or any other AI program as an author. However, the use of LLM techniques can be disclosed in the methods or acknowledgement sections of manuscripts submitted to Nature. Others may allow the use of AI technologies to improve the readability and language of research articles, but emphasize that crucial tasks such as evaluating data and drawing scientific conclusions should still be carried out by the authors.
In conclusion, establishing sound practices for emerging AI technologies remains a challenge. Given the rapid pace of change, continuous monitoring and appropriate adjustment are necessary.

Notes

Authors’ Contributions

A.K. contributed 50% of the ideas, writing, analysis, and approval. V.W. contributed 50% of the ideas, supervision, and approval.

Conflict of Interest

No potential conflict of interest relevant to this article was reported.

References

1. Park JY. Could ChatGPT help you to write your next scientific paper?: concerns on research ethics related to usage of artificial intelligence tools. J Korean Assoc Oral Maxillofac Surg 2023;49:105-6. https://doi.org/10.5125/jkaoms.2023.49.3.105. PMID: 37394928. PMCID: PMC10318315.
2. Kleebayoon A, Wiwanitkit V. Artificial intelligence, chatbots, plagiarism and basic honesty: comment. Cell Mol Bioeng 2023;16:173-4. https://doi.org/10.1007/s12195-023-00759-x. PMID: 37096073. PMCID: PMC10121937.