
Ramírez López and Mora: How New Chatbots Can Support Personalized Medicine

Abstract

Objectives

This study proposes integrating chatbots into personalized medicine and demonstrates how these tools can support that model. Chatbots can deliver tailored health recommendations, facilitate patient-doctor communication, and provide decision support in clinical settings. The goal is to establish a reference framework aligned with national and international standards for personalized healthcare solutions.

Methods

The chatbot model was developed by reviewing 30 scientific and academic articles focused on artificial intelligence and natural language processing in healthcare. The study analyzed the capabilities of existing healthcare chatbots, particularly their capacity to support personalized medicine through accurate data collection and processing of individual health information.

Results

Key parameters identified for effective chatbot deployment in personalized medicine include user engagement, data accuracy, adaptability, and regulatory compliance. The study established a compliance benchmark of 25% based on current industry standards and application performance. The results indicate that the proposed chatbot model significantly increased the precision and efficacy of personalized medical recommendations, surpassing baseline requirements set by standardization organizations.

Conclusions

This model provides healthcare professionals and patients with a robust framework for utilizing chatbots in personalized medicine, focusing on improved patient outcomes and engagement. The research identifies a gap in the application of artificial intelligence-driven tools in personalized healthcare and suggests strategic directions for future innovations. Implementing this model aims to bridge this gap, offering a standardized approach to developing chatbots that support personalized medicine.

I. Introduction

Chatbots, which are computer programs designed to interact with users through messaging apps, chat windows, or voice interfaces, have significantly evolved since their inception over 50 years ago [1]. One of the earliest examples, ELIZA, developed in 1966, simulated a therapist using simple pattern matching to respond to typed questions. Contemporary chatbots leverage advanced techniques to better understand user queries and provide relevant responses, often performing functions traditionally managed by mobile apps or websites [2]. This evolution positions chatbots as critical tools across various fields, including healthcare, where they hold immense potential to support personalized medicine models.
In the realm of personalized medicine, chatbots have emerged as transformative tools capable of delivering tailored health recommendations, enhancing patient engagement, and integrating seamlessly with healthcare systems to provide real-time, data-driven insights. However, the effectiveness of these chatbots depends on a thorough understanding of available models, as well as their strengths, weaknesses, and limitations. Chatbots can be broadly classified into three categories: rule-based systems, machine learning-based systems, and advanced artificial intelligence (AI)-driven systems.
  • (1) Rule-based chatbots: These systems operate on predefined rules and decision trees, making them simple to implement and predictable. However, their rigidity limits their ability to manage complex or unexpected queries, making them suitable only for straightforward, repetitive tasks. This lack of flexibility poses significant challenges in personalized medicine, where nuanced interactions are crucial.

  • (2) Machine learning-based chatbots: These chatbots use algorithms to learn from data, enhancing their responses over time. They offer greater adaptability and the ability to handle a wider range of queries compared to rule-based systems. Nonetheless, they require extensive datasets for training and can inherit biases present in the data. Their performance heavily depends on the quality and diversity of the training data.

  • (3) Advanced AI-driven chatbots: Leveraging advanced technologies such as natural language processing (NLP) and transformers, these chatbots excel at understanding and generating human-like responses. They effectively manage complex interactions and integrate with external application programming interfaces (APIs) and services to deliver real-time information.

However, their development and deployment are resource-intensive, and they may struggle to maintain contextual coherence over extended conversations [3]. Understanding these strengths and limitations is critical for developing effective chatbots in personalized medicine. Additionally, this research proposes integrating the chatbot into the welcome page of the TIGUM Research Group’s website to ensure prompt responses to new users’ queries, thereby improving accessibility and user experience [4]. The core functionality of a chatbot lies in managing dialogues and responses while integrating external services through APIs or servers to retrieve information. Two prominent tools for chatbot development are the Microsoft Bot Framework and ActiveChat.ai.
The Microsoft Bot Framework, combined with Azure Bot Service, provides a comprehensive suite for developing, testing, deploying, and managing intelligent bots. Its modular SDK, along with AI services, enables the creation of bots capable of speech recognition, natural language understanding, and question-answering.
Meanwhile, ActiveChat.ai is a visual platform focused on conversational design, providing tools for seamless business integration. Its intuitive “LEGO for chatbots” approach makes it accessible to beginners while robust enough to address complex business needs [5].
Additionally, as noted by Jiang et al. [6], these two tools differ significantly in their approaches to chatbot implementation. The Microsoft Bot Framework is a development framework that relies on programming languages such as NodeJS or C#, requiring specialized technical knowledge. While it offers greater functionality and customization compared to ActiveChat.ai, its complexity can complicate future updates and evolution. In contrast, ActiveChat.ai is a cloud-based solution emphasizing visual conversation design, making it more accessible for users without extensive programming expertise, yet still capable of supporting advanced business objectives.
Within the context of the TIGUM project, the specific requirements and anticipated evolution of the chatbot’s beta version were carefully assessed. A comparative table was developed, assigning an importance percentage to each criterion and rating each solution on a scale from 1 to 3 [7]. This evaluation ensures that the selected tool aligns with the project’s technical needs and long-term objectives. In an NLP engine, a user’s intent (action) might be to book a hotel (book.Hotel), accompanied by entities (parameters) such as destination, hotel chain, and check-in and check-out dates; this information is returned in JSON format, which the chatbot processes to fulfill the user’s request [8]. Dialogflow emerged as the optimal NLP solution, although this choice is not critical at the outset, since switching engines later remains relatively straightforward. The resulting architecture comprises:
  • User interface: Messenger app and TIGUM website

  • Motor Bot: ActiveChat.ai cloud solution

  • NLP Engine: Dialogflow solution
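The hotel-booking exchange described earlier can be sketched in a few lines of Python. The JSON field names below (`intent`, `entities`, and the entity keys) are illustrative placeholders, not the actual Dialogflow response schema:

```python
import json

# Hypothetical payload of the kind an NLP engine returns after intent
# detection; field names are illustrative, not Dialogflow's real schema.
payload = json.dumps({
    "intent": "book.Hotel",
    "entities": {
        "destination": "Bogotá",
        "hotel_chain": "ExampleInn",
        "check_in": "2025-03-01",
        "check_out": "2025-03-05",
    },
})

def fulfill(raw: str) -> str:
    """Extract the intent and entities and build a confirmation message."""
    data = json.loads(raw)
    if data["intent"] != "book.Hotel":
        return "Sorry, I did not understand the request."
    e = data["entities"]
    return (f"Booking a {e['hotel_chain']} hotel in {e['destination']} "
            f"from {e['check_in']} to {e['check_out']}.")

print(fulfill(payload))
```

In a deployed bot, the chatbot platform would receive this JSON from the NLP engine and route it to the appropriate fulfillment skill rather than printing a message directly.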

To support chatbot functionality within the TIGUM project, a Frequently Asked Questions (FAQ) database was developed using QnA Maker, a cloud-based service provided by Microsoft Azure Cognitive Services. This solution allows the creation of a knowledge base through the entry of question types, titles, and corresponding answers. Upon receiving a user query, the tool’s API retrieves and provides the most relevant response, ensuring accurate and efficient information delivery. For weather integration, the OpenWeather API was employed. A free API key was generated to retrieve weather data by city, providing detailed information in JSON format. This integration allows the chatbot to deliver real-time weather updates, significantly enhancing user experience. To streamline data processing, two Azure Functions were developed to serve as gateways between the chatbot and external web services. These functions extract relevant data from JSON responses and reformat it into a simplified structure for smooth integration with the ActiveChat.ai platform. This approach optimizes data flow and ensures effective communication between the chatbot and external APIs [9]. Technological advancements have improved smartphones’ processing, storage, and connectivity, optimizing application development and resource consumption (see Figure 1).
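The gateway role played by the Azure Functions — extracting relevant data from a JSON response and reformatting it into a simplified structure — can be sketched as follows. The input field names follow OpenWeather’s current-weather response; the simplified output shape is an assumption about what the chatbot platform consumes:

```python
# Minimal sketch of the gateway role the Azure Functions play: take the
# raw OpenWeather JSON and reduce it to a simplified structure for the
# chatbot platform. The output keys are assumptions for illustration.
def simplify_weather(raw: dict) -> dict:
    return {
        "city": raw.get("name"),
        "description": raw["weather"][0]["description"],
        "temperature_c": round(raw["main"]["temp"] - 273.15, 1),  # Kelvin -> Celsius
        "humidity_pct": raw["main"]["humidity"],
    }

sample = {  # abbreviated OpenWeather-style response
    "name": "Bogotá",
    "weather": [{"description": "light rain"}],
    "main": {"temp": 287.15, "humidity": 80},
}
print(simplify_weather(sample))
```

The real functions would first fetch the raw JSON over HTTPS using the API key before applying a transformation of this kind.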
Azure Functions offers an efficient solution for executing small segments of code (“functions”) in a cloud environment. This tool enables developers to concentrate on coding without the need to manage complete applications or infrastructure, thereby enhancing productivity. Azure Functions supports various programming languages, including C#, Java, JavaScript, PowerShell, and Python, and operates under a pay-per-use model, optimizing cost efficiency. Additionally, it scales automatically as demand increases, facilitating serverless application development within the Microsoft Azure ecosystem [5].
Azure Functions, a vital component in the development of the TIGUM project chatbot, provides exceptional features that enhance deployment and efficiency:
  • Language flexibility: Supports multiple programming languages, including C#, Java, JavaScript, and Python, enabling developers to select the most suitable language for their needs.

  • Pay-per-use model: Costs are incurred only during actual code execution, optimizing the project’s economic resources.

  • Customizable dependencies: Supports NuGet and NPM, facilitating easy integration of external libraries.

  • Built-in security: HTTP-activated functions can be secured through OAuth providers such as Azure Active Directory, Google, and Microsoft, protecting sensitive data effectively.

  • Simplified integration: Easily integrates with Azure services and SaaS platforms, crucial for connecting the chatbot to external APIs such as OpenWeather and QnA Maker.

  • Flexible development environment: Enables function development directly within the Azure portal or through continuous integration tools such as GitHub and Azure DevOps.

  • Open source runtime: Azure Functions’ runtime is open-source and available on GitHub, fostering transparency and collaboration [5].

These features have played an instrumental role in ensuring scalability, security, and operational efficiency for the chatbot within the TIGUM project, aligning closely with goals related to resource optimization and continuous improvement (see Figure 2).
Two key functions developed in the TIGUM project are:
  • (1) GetCurrentWeather: Invoked by the getWeather dialog in ActiveChat, this function interfaces with the OpenWeather API to retrieve weather data, returning it in a streamlined JSON format.

  • (2) GetQnaAnswer: Invoked by the FaqAnswer dialog in ActiveChat, this function communicates with the QnA Maker API to fetch answers from the FAQ database, delivering responses in a simplified JSON structure.

These functions illustrate how the TIGUM project effectively integrates external services to enhance chatbot functionality, improving information delivery and user experience.

II. Methods

The study was conducted using a structured approach to review and analyze chatbots, specifically emphasizing the integration of NLP technologies and cloud services to enhance functionality and user experience [10,11]. Integrating QnA Maker into a chatbot involves the following essential steps [12]:

(1) Phase 1: Building the Knowledge Base (FAQ)

QnA Maker, a cloud-based NLP service provided by Microsoft Azure, was utilized to construct a knowledge base enabling the chatbot to respond to FAQs.
The steps included:
  • 1. Creating a QnA Maker resource in the Azure portal.

  • 2. Building the knowledge base by adding files and URLs.

  • 3. Publishing and testing the knowledge base using tools such as cURL or Postman.

  • 4. Programmatically integrating the knowledge base endpoint into the application to return responses in JSON format.
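Step 4 returns responses in JSON; the sketch below shows how a client might select the best answer from a QnA Maker-style response. The payload mirrors the service’s "answers" array with per-answer confidence scores, but the exact field names should be treated as an assumption rather than the authoritative schema:

```python
# Pick the highest-scoring answer from a QnA Maker-style JSON response;
# answers below a confidence threshold are rejected. Field names are an
# assumption for illustration.
def top_answer(response: dict, min_score: float = 30.0):
    """Return the highest-scoring answer, or None below the threshold."""
    answers = response.get("answers", [])
    if not answers:
        return None
    best = max(answers, key=lambda a: a.get("score", 0.0))
    return best["answer"] if best.get("score", 0.0) >= min_score else None

faq_response = {  # abbreviated, hypothetical response
    "answers": [
        {"answer": "Use the REST API to update the knowledge base.", "score": 82.5},
        {"answer": "See the QnA Maker portal.", "score": 40.1},
    ]
}
print(top_answer(faq_response))
```

Thresholding low-confidence answers in this way lets the chatbot fall back to a clarifying question instead of returning a poor match.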

(2) Phase 2: Chatbot Engine Development

ActiveChat.ai, a visual chatbot development platform, was employed to design and manage chatbot interactions.
ActiveChat.ai allows the creation of “skills” representing specific chatbot actions, such as answering FAQs, displaying menus, or processing reservations. These skills are activated by specific events, including initial user interactions (_START) or text submissions (_DEFAULT). Additionally, integration with Dialogflow facilitates natural language processing, enabling the chatbot to interpret user intents and entities effectively.
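The event-driven skill activation described above can be sketched as a simple dispatcher. The event names `_START` and `_DEFAULT` mirror the text; the dispatch logic and handler bodies are assumptions for illustration, not ActiveChat.ai internals:

```python
# Toy dispatcher mirroring ActiveChat.ai's event-driven skills: _START
# fires on a new user's first interaction, _DEFAULT on any text message.
# Handler bodies are placeholders.
SKILLS = {
    "_START": lambda _: "Welcome to the TIGUM chatbot! Ask me anything.",
    "_DEFAULT": lambda text: f"Forwarding to Dialogflow: {text!r}",
}

def handle_event(event: str, payload: str = "") -> str:
    skill = SKILLS.get(event)
    return skill(payload) if skill else "No skill bound to this event."

print(handle_event("_START"))
print(handle_event("_DEFAULT", "What is the weather?"))
```

In the real platform, the `_DEFAULT` skill would forward the raw text to Dialogflow for intent and entity extraction rather than echoing it.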

(3) Phase 3: External Services Integration

Two key functions were developed in Azure Functions to facilitate interaction between the chatbot and external services:
  • GetCurrentWeather: Obtains weather data from the OpenWeather API and returns the data in a simplified JSON format.

  • GetQnaAnswer: Retrieves answers from the QnA Maker knowledge base and returns them in JSON format.

The FAQTigum knowledge base was established using a Microsoft account linked to an Azure resource [13]. QnA Maker imports content, processes user queries, and supplies the most suitable responses. ActiveChat.ai, with its visual “LEGO for chatbots” approach, simplifies conversational design and business integration, making it user-friendly yet robust enough to meet complex business requirements [13]. Figure 3 illustrates the distribution of total, free, and paid apps in various Google Play categories as of 2021.
According to mobile application reports using the LEGO for ChatBots Analytics Software Development Kit [13], developers are encouraged to divide complex bots into manageable “skills.” Each skill comprises a sequence of chatbot actions aimed at achieving a specific goal.
ActiveChat.ai serves as the core component, named “Motor bot,” managing all chatbot dialogues and responses. It also supports integration with external services (APIs or servers) for retrieving information, as shown in the chatbot architecture (see Figure 4).

1. Project Application

As described in a previous study [14], prebuilt skills initiate when new users begin interactions, while the default skill activates upon receiving any text from the user, subsequently calling Dialogflow.
  • Custom skills: These explain chatbot functionalities, including the FAQ and Weather services.

  • Dialogflow skills: These correspond to intents defined in Dialogflow, triggering after the default skill detects relevant intents.

  • API skill: Triggered after the menu_weather or _DF_getweather skills to call the OpenWeather API.

Comprehensive regulatory frameworks for telemedicine services are often lacking [15]. In addition, traditional computer interfaces often require precise and structured inputs, which can feel unnatural and challenging. Users may struggle to understand the required input format, resulting in frustration. Ideally, interfaces should intuitively interpret users’ intentions from natural language.
Developing conversational interfaces can be challenging, as even simple queries demand robust language parsing capabilities. Dialogflow addresses this challenge by delivering high-quality conversational experiences [16–18].
A Dialogflow agent acts as a virtual assistant, managing conversations by interpreting human input through natural language understanding. It converts user text or audio into structured data that applications and services can process. The agent is trained to handle specific conversation scenarios required by the system [19].
Similar to human call center agents, Dialogflow agents manage anticipated conversational scenarios without detailed instructions [20]. The dynamic nature of applications means that brand usage and updates can affect them over time [21], as detailed in the analyzed applications and their verification parameters [22].
An intent represents the user’s goal within a single interaction. Dialogflow employs intent classification to align user input with the most relevant intent [23,24].
Dialogflow includes predefined system entities for common data types like dates, times, colors, and email addresses, and supports creating custom entities for specialized needs, such as vegetable entities for grocery store agents [25]. It uses contexts for natural language management, akin to human conversations, enabling interpretation of ambiguous expressions by leveraging contextual cues [26].
Contexts guide conversation flow through input and output contexts identified by unique string names [27]. Upon intent matching, associated output contexts activate, and Dialogflow matches subsequent intents based on active input contexts [28]. Further chatbot training, particularly developing responses, represents the next critical phase. Training in these key areas is vital to finalize the chatbot’s beta version [29].
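The context mechanism described above can be illustrated with a toy matcher: an intent is eligible only when all of its input contexts are active, and matching it activates its output contexts. This is a simplified stand-in for Dialogflow’s behavior (keyword matching instead of trained NLU), not its actual API:

```python
import re

# Toy intent table: each intent lists trigger keywords, required input
# contexts ("in"), and output contexts activated on a match ("out").
INTENTS = [
    {"name": "ask_weather", "keywords": {"weather"}, "in": set(), "out": {"awaiting_city"}},
    {"name": "give_city", "keywords": {"in"}, "in": {"awaiting_city"}, "out": set()},
]

def match_intent(text, active_contexts):
    """Return (matched intent name or None, updated context set)."""
    words = set(re.findall(r"\w+", text.lower()))
    for intent in INTENTS:
        # Eligible only if all required input contexts are active.
        if intent["in"] <= active_contexts and words & intent["keywords"]:
            # Matching consumes the input contexts and activates the outputs.
            return intent["name"], (active_contexts - intent["in"]) | intent["out"]
    return None, active_contexts

ctx = set()
name, ctx = match_intent("What is the weather?", ctx)
print(name, ctx)
name, ctx = match_intent("in Bogotá", ctx)
print(name)
```

Note that the follow-up utterance “in Bogotá” only matches `give_city` while the `awaiting_city` context is active, which is how contexts disambiguate otherwise ambiguous expressions.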
An initial requirement is integrating the chatbot with the TIGUM website. ActiveChat provides a configurable chat widget, requiring custom server-side code, which was not tested due to constraints. Alternatively, the widget can be configured through a WordPress plugin.
The project’s key focus is connecting the chatbot to the TIGUM database by developing a REST API-based service on the TIGUM server, allowing chatbot data retrieval.
The existing architecture supports this integration approach, drawing on the API examples from OpenWeather and QnA Maker. The server-side service can be implemented in C# or a similar language, resembling the Azure Function managing communication between APIs and ActiveChat.

III. Results

After a detailed analysis of each evaluated chatbot, a compliance percentage range was established. This indicates the proportion of chatbots meeting the requirements defined by the 17 selected validation items, as illustrated in Figure 5.

1. Proposed Validation Model for Chatbots

As a result of the analysis and comprehensive theoretical investigation and information processing, Table 1 presents the proposed reference parameters for future chatbot validation. These parameters consider at least the following general characteristics, derived from evaluating the assessed chatbots:
Following parameter refinement, a minimum compliance percentage of 25% was established as the foundational threshold for the validation model’s parameters. This figure reflects the number of evaluated chatbots and recognizes that certain parameters are inherently managed by the app platforms themselves.
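As a worked illustration of the threshold, a chatbot’s compliance percentage over the 17 validation items can be computed as the fraction of items met. The numbers below are invented for the example, not taken from the study’s data:

```python
# Hypothetical compliance computation against the 17 validation items;
# the item counts below are invented for illustration.
TOTAL_ITEMS = 17

def compliance_pct(items_met: int, total: int = TOTAL_ITEMS) -> float:
    """Percentage of validation items satisfied, rounded to one decimal."""
    return round(100 * items_met / total, 1)

print(compliance_pct(5))           # a chatbot meeting 5 of the 17 items
print(compliance_pct(5) >= 25.0)   # does it clear the 25% threshold?
```

Under this reading, a chatbot must satisfy at least 5 of the 17 items (about 29.4%) to clear the 25% baseline, since 4 items yields only about 23.5%.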

2. Outcome of the Reference Model for Chatbot Validation

The validation model depicted in Figure 6 is based on the refined parameters from Table 1, developed by analyzing essential verification processes for ensuring chatbot security and accuracy within personalized medicine. This model outlines the critical characteristics and parameter flow or requirements necessary for chatbot compliance and proper functionality. It serves as a crucial component in the validation and approval framework, guiding the development of enhanced chatbot applications [30].
These results address previously identified issues concerning chatbot functionality, aligning with the Pan American Health Organization’s strategy to deliver accurate and secure electronic health information, thereby enhancing health service quality, research, education, and knowledge dissemination.

IV. Discussion

The results underscore the necessity of involving stakeholders, including developers and technical implementers, in managing technological transitions for chatbot applications in personalized medicine. This collaborative approach effectively addresses key challenges related to platform development and AI-driven solution implementation, ensuring chatbot applications fulfill personalized healthcare requirements [21].
The study identifies a significant disparity between AI technologies’ technical potential and their actual real-world outcomes. This gap arises partly because end-users, such as healthcare providers and patients, often lack the technical proficiency necessary to adopt and utilize these technologies effectively [23]. Such limitations underscore the importance of training and educational initiatives for successful integration.
Although the proposed model emphasizes collaboration among developers, healthcare providers, and patients, achieving practical harmony among these groups can be challenging. Ineffective stakeholder communication may delay implementation processes and diminish chatbot effectiveness in clinical environments.
This study aimed to develop a reference framework for effectively implementing chatbots in personalized medicine, and the results affirm the importance of adopting a collaborative and user-centric approach. To bridge the gap between technological potential and practical outcomes, it is vital to train end-users, foster collaboration, and expand research.
Furthermore, the results and proposed validation model indicate that developing chatbot applications through collaborative efforts involving all stakeholders is the most effective strategy for creating efficient technological tools in personalized medicine. Aligning developers, healthcare providers, and patient contributions ensures AI-driven chatbots provide seamless and impactful personalized healthcare experiences [29].
In conclusion, integrating chatbots into personalized medicine represents a significant advancement in healthcare delivery. By utilizing individual patient data, these AI-driven tools deliver personalized recommendations and support, enhancing treatment precision and effectiveness. This aligns with the increasing emphasis on patient-centered care, demonstrating the potential for chatbots to revolutionize traditional healthcare models by making them more accessible, efficient, and responsive to individual patient needs.
Despite the advanced capabilities of chatbot technology, a notable gap persists in practical application, primarily due to limited technical skills among users. To address this gap, it is crucial to adopt user-centric design principles and provide comprehensive training and resources to both patients and healthcare providers. Empowering users in this manner will maximize chatbot adoption and impact, ensuring they fulfill their promise to improve healthcare outcomes.
The success of chatbot applications in personalized medicine relies significantly on the collaboration of all relevant stakeholders, including developers, healthcare providers, and patients. The proposed validation model emphasizes the importance of this multidisciplinary approach, ensuring chatbot solutions are developed to address the complex requirements of personalized medicine while aligning with clinical objectives and user expectations. Such collaboration promotes innovation and translates technological advancements into tangible real-world benefits.
Realizing the full potential of chatbots in personalized medicine requires ongoing research and development. Future initiatives should focus on refining algorithms, advancing natural language processing capabilities, and broadening personalized healthcare interventions’ scope. Collaborations among academic institutions, industry leaders, and regulatory bodies will be crucial to drive innovation, ensure patient safety, and effectively integrate chatbot technologies into healthcare systems. This ongoing advancement will position chatbots as essential tools in future healthcare delivery.

Notes

Conflict of Interest

No potential conflict of interest relevant to this article was reported.

Acknowledgments

The authors thank the student Martin Beguinot of CPE Lyon (France), who worked under the supervision of the engineer Leonardo Ramirez, for his special support, and the Military University Nueva Granada for supporting the registered project INV-ING-3948, which includes PICING-3887.

References

1. Chhalkar SR. Can generative AI truly transform healthcare into a more personalized experience? [Internet]. Manchester, UK: News-Medical.Net;2024. [cited at 2024 Oct 9]. Available from: https://www.news-medical.net/news/20240402/Can-generative-AI-truly-transform-healthcare-into-a-more-personalized-experience.aspx.
2. Laboratorios Rubio. The role of artificial intelligence in personalized medicine [Internet]. Barcelona, Spain: Laboratorios Rubio;c2024. [cited at 2024 Oct 9]. Available from: https://www.laboratoriosrubio.com/en/ai-personalized-medicine/.
3. Karim S. How AI can lead to personalized medicine [Internet]. Malvern (PA): Insurance Thought Leadership;2023. [cited at 2024 Oct 9]. Available from: https://www.insurancethoughtleadership.com/life-health/how-ai-can-lead-personalized-medicine.
4. HGF. AI-powered personalised medicine [Internet]. Aberdeen, Canada: HGF;2023. [cited at 2024 Oct 9]. Available from: https://www.hgf.com/blogs/ai-powered-personalised-medicine/.
5. Topol EJ. Deep medicine: how artificial intelligence can make healthcare human again. New York (NY): Basic Books;2019.
6. Li LR, Du B, Liu HQ, Chen C. Artificial intelligence for personalized medicine in thyroid cancer: current status and future perspectives. Front Oncol. 2021; 10:604051. https://doi.org/10.3389/fonc.2020.604051.
7. Wiens J, Shenoy ES. Machine learning for healthcare: on the verge of a major shift in healthcare epidemiology. Clin Infect Dis. 2018; 66(1):149–53. https://doi.org/10.1093/cid/cix731.
8. Esteva A, Robicquet A, Ramsundar B, Kuleshov V, De-Pristo M, Chou K, et al. A guide to deep learning in healthcare. Nat Med. 2019; 25(1):24–9. https://doi.org/10.1038/s41591-018-0316-z.
9. Zou J, Huss M, Abid A, Mohammadi P, Torkamani A, Telenti A. A primer on deep learning in genomics. Nat Genet. 2019; 51(1):12–8. https://doi.org/10.1038/s41588-018-0295-5.
10. Obermeyer Z, Emanuel EJ. Predicting the future: big data, machine learning, and clinical medicine. N Engl J Med. 2016; 375(13):1216–9. https://doi.org/10.1056/NEJMp1606181.
11. Nestler EJ, Waxman SG. Resilience to stress and resilience to pain: lessons from molecular neurobiology and genetics. Trends Mol Med. 2020; 26(10):924–35. https://doi.org/10.1016/j.molmed.2020.03.007.
12. Beam AL, Kohane IS. Big data and machine learning in health care. JAMA. 2018; 319(13):1317–8. https://doi.org/10.1001/jama.2017.18391.
13. Duong D, Solomon BD. Artificial intelligence in clinical genetics. Eur J Hum Genet. 2025; 33(3):281–8. https://doi.org/10.1038/s41431-024-01782-w.
14. Jiang L, Wu Z, Xu X, Zhan Y, Jin X, Wang L, et al. Opportunities and challenges of artificial intelligence in the medical field: current application, emerging problems, and problem-solving strategies. J Int Med Res. 2021; 49(3):3000605211000157. https://doi.org/10.1177/03000605211000157.
15. Hamilton AJ, Strauss AT, Martinez DA, Hinson JS, Levin S, Lin G, et al. Machine learning and artificial intelligence: applications in healthcare epidemiology. Antimicrob Steward Healthc Epidemiol. 2021; 1(1):e28. https://doi.org/10.1017/ash.2021.192.
16. McTear M. Conversational AI: dialogue systems, conversational agents, and chatbots. Cham, Switzerland: Springer;2022. https://doi.org/10.1007/978-3-031-02176-3.
17. Brabra H, Báez M, Benatallah B, Gaaloul W, Bouguelia S, Zamanirad S. Dialogue management in conversational systems: a review of approaches, challenges, and opportunities. IEEE Trans Cogn Dev Syst. 2021; 14(3):783–98. https://doi.org/10.1109/TCDS.2021.3086565.
18. Motger Q, Franch X, Marco J. Software-based dialogue systems: survey, taxonomy, and challenges. ACM Computing Surveys. 2022; 55(5):91. https://doi.org/10.1145/3527450.
19. Pereira R, Lima C, Reis A, Pinto T, Barroso J. Review of platforms and frameworks for building virtual assistants. Rocha A, Adeli H, Dzemyda G, Moreira F, Colla V, editors. Information systems and technologies. Cham, Switzerland: Springer;2024. p. 105–14. https://doi.org/10.1007/978-3-031-45648-0_11.
20. Ferreira P. Automatic dialog flow extraction and guidance. In: Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics: Student Research Workshop; 2023 May 2–4; Dubrovnik, Croatia. p. 112–22. https://doi.org/10.18653/v1/2023.eacl-srw.12.
21. Zhang JZ, Chang CW. Consumer dynamics: theories, methods, and emerging directions. J Acad Mark Sci. 2021; 49(1):166–96. https://doi.org/10.1007/s11747-020-00720-8.
22. Kim SJ, Wang RJ, Malthouse EC. The effects of adopting and using a brand’s mobile application on customers’ subsequent purchase behavior. J Interact Market. 2015; 31(1):28–41. https://doi.org/10.1016/j.intmar.2015.05.004.
23. Hefny AH, Dafoulas GA, Ismail MA. Intent classification for a management conversational assistant. In: Proceedings of the 2020 15th International Conference on Computer Engineering and Systems (ICCES); 2020 Dec 15–16; Cairo, Egypt. p. 1–6. https://doi.org/10.1109/ICCES51560.2020.9334685.
24. Jung S, Lee C, Kim K, Lee D, Lee GG. Hybrid user intention modeling to diversify dialog simulations. Comput Speech Lang. 2011; 25(2):307–26. https://doi.org/10.1016/j.csl.2010.06.002.
25. Shaon HK. Development of a cloud based interactive query reply system for an enterprise: a case study [dissertation]. Dhaka, Bangladesh: Institute of Information and Communication Technology, Bangladesh University of Engineering and Technology;2020.
26. Miller R. Integrating natural language processing for sophisticated semantic parsing and context management in dialogue systems. J Comput Technol Softw. 2024; 3(4):68.
27. Sack W. Conversation map: an interface for very large-scale conversations. J Manag Inf Syst. 2000; 17(3):73–92. https://doi.org/10.1080/07421222.2000.11045652.
28. Izadi S, Forouzanfar M. Error correction and adaptation in conversational AI: a review of techniques and applications in chatbots. AI. 2024; 5(2):803–41. https://doi.org/10.3390/ai5020041.
29. Sasubilli G, Kumar A. Machine learning and big data implementation on health care data. In: Proceedings of the 2020 4th International Conference on Intelligent Computing and Control Systems (ICICCS); 2020 May 13–15; Madurai, India. p. 859–64. https://doi.org/10.1109/ICICCS48265.2020.9120906.
30. Skuridin A, Wynn M. Chatbot design and implementation: towards an operational model for chatbots. Information. 2024; 15(4):226. https://doi.org/10.3390/info15040226.

Figure 1
Azure Function (tigumbotservice) resources.
Figure 2
Knowledge base example.
Figure 3
Prebuilt skills.
Figure 4
Project skills.
Figure 5
The getWeather intent training phrases.
Figure 6
Widget integration via code.
Table 1
Parameters for the reference model

Step 1: The client application sends the user’s question (text in their own words), “How do I programmatically update my Knowledge Base?”, to your knowledge base endpoint.

Step 2: QnA Maker uses the trained knowledge base to provide the correct answer and any follow-up prompts that can be used to refine the search for the best answer. QnA Maker returns a JSON-formatted response.

Step 3: The client application uses the JSON response to make decisions about how to continue the conversation. These decisions can include showing the top answer or presenting more choices to refine the search for the best answer.