Using ChatGPT in health care is concerning

Contemporary PEDS Journal, September 2023, Volume 40, Issue 8

Donna Hallas, PhD, PPCNP-BC, CPNP, PMHS, FAANP, FAAN, shares her thoughts on the latest issue of Contemporary Pediatrics, including her concern with using ChatGPT for medical information and decisions.

Image Credit: © Daniel CHETRONI - stock.adobe.com

Article highlights

  • Artificial intelligence (AI) is rapidly emerging in health care, including pediatrics.
  • Concerns exist about AI reliability in medical and nursing decisions.
  • Relying on AI can unintentionally harm patients.
  • Emphasis is on evidence-based practice in health care education.
  • Policies are needed to ensure AI's safe use in health care for all patients.

I always enjoy reading all the articles in the Contemporary Pediatrics® annual tech issue. It is amazing how quickly machine learning (ML) and artificial intelligence (AI) have emerged within health care delivery systems. In the article “Using artificial intelligence in day-to-day practice,” Matthew Fradkin, MD, raises many fascinating issues related to advances in AI in health care electronic medical records (EMRs) and the analysis of data in pediatric intensive care units and emergency departments.1 However, with all these advances, major concerns remain. One concern that I share with Fradkin is the use of ChatGPT in the medical and nursing professions. Currently, ChatGPT is not a reliable source for nursing or medical decision-making, raising the potential for harm not only to our pediatric patients but to all patients. Inexperienced nurses or nurse practitioners (NPs), or those who believe the source to be accurate and rely on ChatGPT for nursing and advanced practice decisions, place their patients at high risk of unintentional harm.

Educating nurses and nurse practitioners about ChatGPT

The education of undergraduate nursing students and graduate NP students is based on the principles of conducting organized literature searches in reliable, high-quality databases (eg, CINAHL, PubMed, MEDLINE) to obtain the best available evidence for care management. But what happens when a source such as ChatGPT is used, and the material obtained is not evidence based and consists of false information from nonexistent references? The obvious result is that pediatric patients are at risk for treatment failures. This is not the outcome intended by any nurse, NP, or other health care provider. Like physicians, nurses and NPs are educated to “do no harm.” Applying incorrect information in a care management plan for any patient has the potential to do serious harm.

For many diagnoses, we base our decisions on clinical practice guidelines (CPGs). As health care providers, we rely on CPGs as a trustworthy source for clinical practice. The evidence supporting CPGs is clearly presented: numerous research studies are analyzed, discussed, and debated by experts in the field, and the guidelines are disseminated with all supporting evidence and references. Even after a new CPG is released, it should be implemented only after it has been analyzed for use in the specific clinical practice setting and the patient population served. For example, some CPGs apply only to healthy children and cannot be used for children with chronic illnesses. If asked, would ChatGPT be able to analyze this critical information? Thus, asking ChatGPT to analyze a diagnosis and the treatment recommendations in a CPG with the intention of using the information in clinical practice is very concerning. An individual unfamiliar with the topic may use the information produced by ChatGPT without knowing whether it is accurate or inaccurate.

Thoughtful policies must be developed to ensure safety

Many individuals may not be aware of the unreliable data that may appear in ChatGPT’s responses to the questions they raise. Thus, discussions must be undertaken in all medical and nursing professional organizations, practices, universities, and health care professional continuing education programs, and these groups must be encouraged to establish policies. Opinions about the product and its use in education differ, but collectively we must consider all facets of AI products, especially ChatGPT. Policies must be developed that ensure the safety not only of our pediatric patients but of all patient populations.


Reference:

1. Fradkin M. Using artificial intelligence in day-to-day practice. Contemporary Pediatrics. 2023;39(8):27-39.
