In 2023, the introduction of large language models (LLMs) brought about significant developments in the medical field. Academic conferences held special sessions on ChatGPT, which sparked debates on the ethics of authorship for papers generated by ChatGPT. Against this backdrop, the Korean Society of Medical Informatics organized a Health and Medical Big Data Forum in December 2023, with support from the Ministry of Health and Welfare, to examine the wide-ranging impacts of these models on healthcare from various perspectives.
In Part I, the following presentations were given:
Dr. Jun Ho Lim of the Electronics and Telecommunications Research Institute delivered a presentation on LLMs, explaining how their operational approach diverges from that of traditional machine learning. When asked whether LLMs truly replicate human intelligence, he noted that although they can imitate human intelligence to a degree, it is difficult to regard them as truly understanding human knowledge, because they rely on probability-based prediction of the next word. He added that training LLMs on insufficient data can give rise to hallucination.
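To make the probability-based mechanism Dr. Lim described more concrete, the following is a minimal sketch (not taken from the presentation) that inspects next-word probabilities using the open GPT-2 model from the Hugging Face transformers library; the model choice and prompt text are illustrative assumptions.

```python
# Minimal sketch: an LLM scores every candidate next token and we look at the
# most probable ones. GPT-2 is used here only as a small, openly available
# stand-in for larger models such as those behind ChatGPT.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The patient was admitted to the hospital with"  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits          # shape: (1, sequence_length, vocab_size)

next_token_logits = logits[0, -1]            # scores for the next position only
probs = torch.softmax(next_token_logits, dim=-1)

# Show the five most probable next words and their probabilities.
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r:>12}  p={p.item():.3f}")
```

Production systems usually sample from this distribution rather than always taking the most probable token, but the underlying mechanism is the same next-word probability estimate, which is why the output can sound fluent without reflecting genuine understanding.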
The second speaker, Professor Hyung Jun Joo of Korea University, who was the first in Korea to develop a medical BERT model, discussed the challenges of using ChatGPT in healthcare from a clinical perspective. He first addressed the medical community's reaction to ChatGPT, noting that when it was introduced in 2023, the general attitude of medical professionals could be gauged from the comments of various doctors: they highlighted the critical role of empathy in the doctor-patient relationship and expressed doubts about whether machines could replace human doctors. He then cited an instance in which ChatGPT, which had recently passed the US medical licensing examination, was tested on Korean medical exam questions. The exercise showed that while ChatGPT could produce answers, its performance was strongly affected by the nature of the prompts it received from users.
The third speaker, Anne Shin, a senior executive at Microsoft, introduced various solutions related to OpenAI. Among these, she demonstrated several Copilot features, showing how simply entering a few sentences can automatically generate a PowerPoint presentation. She also introduced a new Microsoft solution, developed in collaboration with OpenAI, designed to strengthen privacy protection, an area that remains a weakness in OpenAI's current offerings.
As the final speaker, Professor Hyoun-Joong Kong from Seoul National University Hospital provided an overview of the development of artificial intelligence in medicine. He discussed how the advent of big data, together with advances in search technology and machine learning algorithms, has led to artificial intelligence systems capable of analyzing large datasets efficiently, and how these technological advances in data science have converged to shape the artificial intelligence we use today. He also highlighted recent examples of artificial intelligence proving effective in clinical settings, particularly its application during the COVID-19 pandemic, and illustrated how it has been used to predict side effects in patients participating in clinical trials.
During the panel discussion in Part II, Professor Seng Chan You of Yonsei University, serving as a panelist, noted that ChatGPT could make treatment details easier for patients to understand. He explored the potential impact on patients by presenting experimental results showing how medical records written by healthcare professionals could be transformed into versions that are more comprehensible to patients, and he discussed the potential for enhancing patient-centered services with the assistance of ChatGPT.
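As a hypothetical illustration of the kind of patient-friendly rewriting Professor You described (this is not his experimental setup), the sketch below sends a shorthand clinical note to OpenAI's chat completions API and asks for a plain-language version; the model name, prompt wording, and note text are all assumptions made for the example.

```python
# Minimal sketch: rewriting a clinical note in plain language with the OpenAI
# Python SDK (v1.x). Requires the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

# Illustrative, fictitious note written in clinician shorthand.
clinical_note = (
    "Pt presents w/ exertional dyspnea x2 wks. Echo: LVEF 35%. "
    "Start ACE inhibitor, f/u in 4 wks."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {
            "role": "system",
            "content": (
                "Rewrite the clinical note provided by the user in plain language "
                "that a patient with no medical background can understand. "
                "Do not add any information that is not in the note."
            ),
        },
        {"role": "user", "content": clinical_note},
    ],
)

print(response.choices[0].message.content)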
He also raised the issue that ChatGPT's answers to important questions can shift depending on how users phrase their prompts. Specifically, he cited examples in which doctors might revise their initial decisions after reviewing preliminary results produced by ChatGPT, typically after asking whether a specific test was necessary. He voiced concern that this pattern could gradually shift the boundaries of proper decision-making and increase the likelihood of false-positive outcomes, observing that ChatGPT might recommend unnecessary tests during patient consultations.
Attorney Sang Tae Jeong, a partner at Yulchon Law Firm, addressed the legal challenges associated with the use of artificial intelligence in healthcare. He outlined three primary concerns. First, he examined whether services that utilize artificial intelligence might infringe on the rights of others, highlighting cases involving the unauthorized collection of personal information, the collection and unauthorized export of trade-secret data, and the unauthorized collection of publicly disclosed data that is subject to certain restrictions. Second, he discussed the protection of intellectual property generated by artificial intelligence. Third, he explored the responsibilities of healthcare professionals who rely on medical artificial intelligence, noting that responsibility would be shared between the developers of the artificial intelligence and the healthcare professionals themselves.
The Health and Medical Big Data Forum of 2023 provided an excellent opportunity to explore the applications of LLMs in the medical field and to address the associated concerns. Healthcare is a complex sector where data and diverse technologies converge, and introducing new technologies there can profoundly affect the entire healthcare industry. It is therefore necessary to understand and make use of the practical advantages of the technology, while also addressing the related concerns, in order to integrate it safely.