The technology may be new, but the position we find ourselves in as clinicians is not.
Before electronic medical records (EMRs) became ubiquitous in the medical landscape, we were promised that welcoming digitization into the clinic would deliver a digital health utopia. Converting our patients’ medical records to digital form, we were told, would bring enormous benefits to providers and patients alike: information sharing and interoperability, improved patient safety driven by automated computer algorithms, and, of course, time saved through the ease of computer-based documentation and prescribing, a monumental shift in work/life balance for pediatricians. The reality of its implementation, though, has not played out the way we all had hoped.
We are all too familiar with the outcome of implementing EMRs and the stark contrast with the promises and hopes at the outset. At the intersection of this burgeoning computer technology and medicine, providers did the heavy lifting, adapting how we perform our daily clinical duties rather than the new technology adapting to us as practicing pediatricians. We find ourselves as human extensions of the EMR, bending our workflows and highly trained clinical decision-making processes to better accommodate the all-powerful system. And rather than having providers at the forefront of building and improving the EMR, it is often those with little to no medical training making the decisions about how to “improve” it. It has left us with a record-collecting method primed for billing and lagging in patient care benefits, lots of clicks, cloud and server crashing frustrations, more clicks, lack of interoperability, and did I mention those clicks?
And yet here we are, on the verge of a new leap in digital technology set to change the medical landscape once again. From daily news articles touting fun new uses of artificial intelligence (AI) for the general population to glints of AI testing the waters in the medical setting, proponents of AI in medicine are promoting the same ideas and making the same promises as the EMR proponents were a few decades ago. But this time it does feel and sound a bit different. Maybe this time it is different. We are crossing our collective fingers, hoping that it is.
The overarching difference AI provides over previous “smart” medical and EMR technology is how the data are collected and interpreted. Previous computing analytics relied on structured data meticulously sorted and entered into spreadsheets and forms. These data would then be run through set equations and decision trees with little room for deviation from the original parameters, and the program would spit out the result for the user in a similarly organized form. AI and machine learning (ML), on the other hand, take this many steps further. AI and ML can process unstructured data, and the result is not limited to a desired answer from a set equation or algorithm; the system can return entirely new information and observations, a capability known as generative AI. To make an analogy, old computing techniques were akin to an elementary student memorizing a text to answer comprehension questions (what was the protagonist’s name?), while new AI and ML would be a college graduate able not only to read the text and answer similar questions but also to write an essay comparing the protagonists of similar works, noting key differences and similarities between them.
Knowing this, it may be easier to imagine and understand the immediate benefit of AI and ML in the research realm. With every leap in computing capability, the ability to crunch larger amounts of complex data faster has yielded proportionally large advances in science and medicine. The ability of current AI offerings to collect unstructured or unlabeled data will allow larger and more varied datasets to answer our scientific questions. But the real leap that new AI technology allows in the scientific process is not in evaluating the questions at hand but in coming up with the right questions to ask in the first place.
Using AI offerings from companies like Humata (humata.ai), we can load in datasets, scientific articles, and other important references in a particular field of study, and the AI can comb through the data and surface its own trends and findings that may warrant further questioning and inquiry. Public health researchers are already making gains in population and environmental health research with AI tools that find better questions to ask in search of hidden connections in the ways that health and environment intersect, using AI to “see” interplay previously thought invisible to the human eye and mind. The ability of AI and ML to find cause and effect in seemingly disparate datasets has the power to illuminate health trends and aberrations as targets for population-based health care solutions. It is no less groundbreaking than the way complex computing years before lit the path for genomics and the shift in medical research and treatment that went along with it.
But is AI ready to make the leap to the clinical world?
Nascent AI technology is already being investigated for monitoring slight changes in electrocardiogram and lab values in hopes of predicting and preventing sepsis in the pediatric intensive care unit.1,2 Having a tool that can explore more continuous data points than a human provider can fathom is an obvious boon to a field where seconds matter and slight changes in vital signs and lab values can signal probable patient deterioration. Pediatric emergency rooms have also started to test ML in the context of the current mental health crisis. Using ML to triage and provide for the timely and safe discharge of adolescents presenting with suicidal ideation, emergency rooms are hoping to identify those at highest risk for morbidity and mortality based not just on presenting symptoms but also on support network strength, home safety, and various other factors often too burdensome to fully synthesize in the emergency setting.3 They are making strides to safely usher those at highest risk into the inpatient care they need in a timely manner, while those noted as lower risk based on similar datasets are safely discharged to at-home care with community support and resources.
The amount of constant new data in acute and inpatient settings plays to the strengths of ML and AI, but there are multiple examples of low-hanging fruit in ambulatory and primary care pediatrics as well. Burnout among providers is at an all-time high, with the EMR and mundane nonclinical tasks often cited as the reason for this sense of dismay across the field. What if we could leverage AI technology to take on the burnout-inducing aspects of medical care such as filling out forms, clicking boxes, and medical record deep dives? With AI completing these tasks, we could then get the added benefit of ML analyzing this unstructured data throughout the medical record, possibly finding clinical pearls: overlooked or not-yet-investigated trends, clinical signs, and early or missed symptoms.
Ambulatory clinics have begun to tiptoe cautiously into the AI waters by addressing some care-adjacent issues with new AI tools. For instance, clinics are using ML to look for trends among patients and families prone to missed appointments or poor follow-up.4 By analyzing the provider’s or clinic’s panel EMR information, the AI assigned to the task notes trends that can determine how likely a patient is to miss an appointment based on factors both known and unknown to the clinical staff. And by noting previously recorded times of patient portal interaction or phone calls to the clinic, the AI can determine the optimal time to send a visit reminder based on each patient’s history and messaging usage times.
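To make the idea concrete, here is a minimal, hypothetical sketch in Python of the two signals just described: a smoothed no-show rate from appointment history and a preferred reminder hour from portal login times. Real systems train models on far richer features; every name and value here is invented for illustration only.

```python
from collections import Counter

def no_show_risk(history):
    """Estimate a patient's no-show probability from past appointments.

    history: list of booleans, True = patient missed that appointment.
    Laplace smoothing gives brand-new patients a neutral 0.5 prior.
    """
    missed = sum(history)
    return (missed + 1) / (len(history) + 2)

def best_reminder_hour(login_hours):
    """Pick the hour of day (0-23) the patient most often opens the portal."""
    if not login_hours:
        return 9  # no usage data: default to a mid-morning reminder
    return Counter(login_hours).most_common(1)[0][0]

# A patient who missed 2 of 4 visits and usually logs in around 8 pm
risk = no_show_risk([True, True, False, False])   # 0.5
hour = best_reminder_hour([20, 20, 8])            # 20
```

A production tool would weigh many more variables (distance to clinic, visit type, insurance churn), but the shape of the problem, score risk and time the outreach, is the same.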
With the ability of current AI technology to analyze unstructured data and perform generative functions, it has expectedly ventured into altering the way a provider interacts with the EMR beyond charting alone. We, health care providers, are currently the AI, the “actual intelligence,” typing and clicking away to make sure the EMR has the data it requires in its rigid and defined structure. Often this structure is antithetical to the provider’s workflow, with displays of raw and computed data hidden behind tabs and tabs of other data. With new AI there may be no need for filling or checking boxes, or for picky algorithms that rely on pediatricians to sort and sift through patient data before serving it to the EMR on a silver platter. From a simple SOAP (subjective, objective, assessment, and plan) note, an EMR using machine learning would be able to naturally glean the unstructured information we would normally shuffle through tabs and flow sheets to find. It could then enter orders, place referrals, and update team members on plans of care, all from that one note. For those of us trying to unload the burden of charting with dictation or virtual/remote scribe technology, AI is transforming the digital scribe landscape, with most current offerings touting their current or soon-to-be use of AI and ML. The goal is to let the provider run a free-form patient visit while the AI sits and listens patiently, deciding which parts of the discussion belong in each section of the structured SOAP note, which box of the flow sheet, and which order in the chart.
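For a sense of the target output, here is a deliberately simplified Python sketch that splits an already-labeled note into SOAP sections. This is not how an AI scribe works; a real scribe infers the structure from an unlabeled transcript. The splitter only illustrates the structured form the AI is expected to produce, and the sample note is invented.

```python
import re

def split_soap(note):
    """Split a free-text note into its SOAP sections by their headings.

    Returns a dict keyed by lowercase section name, e.g. {"plan": "..."}.
    """
    heading = re.compile(r"(?im)^(subjective|objective|assessment|plan)\s*:\s*")
    matches = list(heading.finditer(note))
    parts = {}
    for i, m in enumerate(matches):
        # Each section runs until the next heading (or the end of the note).
        end = matches[i + 1].start() if i + 1 < len(matches) else len(note)
        parts[m.group(1).lower()] = note[m.end():end].strip()
    return parts

note = (
    "Subjective: cough for 3 days\n"
    "Objective: afebrile, lungs clear\n"
    "Assessment: viral URI\n"
    "Plan: supportive care"
)
sections = split_soap(note)  # sections["plan"] == "supportive care"
```

The hard part the AI scribe promises to solve is precisely what this sketch skips: deciding, from raw conversation, which sentence is subjective history and which belongs in the plan.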
This ability of ML to harness enough computing power to allow for nuance would also help eliminate needless pop-ups from the EMR. As providers, we deal with the EMR’s yes/no decision tree algorithms every day. Having your work stopped by an alert about a pregnancy warning when you are prescribing a medicine to a 7-year-old is irritating at best and at times downright disparaging. With new AI capabilities, the EMR would be able to look at the whole of the patient chart and draw its own nuanced conclusion from all the available data; it could recognize the unlikelihood of a 7-year-old being pregnant and suppress the pop-up warning, ultimately easing the burden providers experience under a system built on a single EMR programmer’s decision tree algorithm.
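Even without ML, the difference between a naive alert and a context-aware one can be sketched in a few lines. This hypothetical Python snippet is only an illustration of the principle, checking patient context before interrupting the prescriber; the field names and thresholds are invented, not any EMR vendor's actual logic.

```python
def should_show_pregnancy_alert(patient, drug):
    """Decide whether a pregnancy warning is worth interrupting the provider.

    A naive EMR fires the alert whenever the drug carries a pregnancy
    warning; this version first checks whether it can plausibly apply.
    """
    if "pregnancy" not in drug.get("warnings", ()):
        return False
    # Suppress for patients outside any plausible childbearing context.
    if patient["age_years"] < 10:
        return False
    if patient.get("sex") == "male":
        return False
    return True

child = {"age_years": 7, "sex": "female"}
teen = {"age_years": 16, "sex": "female"}
drug = {"name": "examplecillin", "warnings": ("pregnancy",)}

should_show_pregnancy_alert(child, drug)  # False: no needless pop-up
should_show_pregnancy_alert(teen, drug)   # True: clinically relevant
```

The promise of AI here is to replace hand-coded rules like these with a judgment drawn from the whole chart, but the goal is the same: alerts that fire only when they can matter.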
With this plethora of opportunity for advancement of the provider and patient experience that AI may bring, the question we now need to ask ourselves is not just what can this technology do—because at this moment it does seem like the sky is the limit—but rather, what are we comfortable allowing AI and ML technology to do for us and, more importantly, our patient population?
How far should we allow AI to perform these generative tasks? Or more importantly, how comfortable are we allowing generative AI to intervene in direct patient care? These are the questions we must ask ourselves as those caring specifically for children. Automated tasks by AI may be tolerated in the adult patient population, but those same uses may not be as safe or worthwhile in the pediatric population. Would you be comfortable with an AI chatbot responding to general care questions through a patient portal, such as recommending the correct ibuprofen dosing? Maybe so, as the AI can see the patient’s weight and age and check the medical history for any kidney disease, right? But what about responding to a message about a diaper rash? It does sound nice to have this constant inbox fodder taken care of by an automated AI function. Rather than typing the same repetitive response for a diaper rash, generative AI could, with some minor tweaks based on the patient’s demographics and past history of rashes, answer the family’s questions and provide further guidance and monitoring parameters based on your previous recommendations to other families sending similar messages.
Let’s go even further. Would you allow a visual or graphic interface ML tool to make a rash diagnosis? What if AI could analyze that rash photo the family sent, correctly match it against its own database of thousands of diaper rash photos, and offer the correct guidance and OTC therapies matching your preferences for treatment and follow-up, all based on previous clinic visits for the same type of rash? Too good to be true, or one step too far into the realm of the clinician? Well, it turns out Google already thinks it can do all that, with more accuracy than a physician, with DermAssist (https://health.google/consumers/dermassist/).
After touting the possible benefits of AI, we would be remiss not to delve into its glaring limitations. AI and ML run on data, but where are the data that various AI platforms use to generate information and inferences coming from? After all, in ML, the machine can learn and propose only as well as the data it is given. As we know in medicine, much of the data we have historically used have been limited in terms of patient demographics and are lacking in validated datasets and studies of minority populations. It has been well documented that these inherent biases have led to incorrect generalizations and the marginalizing of communities in health care that we are only now trying to address. AI suffers from these same biases because of the information currently available from which it can learn.
A simple demonstration comes from the consumer-facing AI offering ChatGPT. Most of ChatGPT’s dataset comes from the Common Crawl, a collection of over 12 years of raw webpage data, metadata, and text extracts from websites starting in 2008. It does not have access to medical journal articles or raw research data. It is also important to note that the information used to train ChatGPT ends at 2021, so those looking to ChatGPT or similar AI platforms for the most recent treatment or diagnostic recommendations may end up with outdated information or practices that have been discontinued. The current learning model for ChatGPT now rests mainly on feedback from users based on prompts and messages submitted to the system, allowing yet another place in the computational framework for personal preference to deter the AI from generating impartial data. If the main source of new learning is group feedback, one can see how entities might flood the platform with positive ratings for a certain brand of medication as a recommended treatment option and negatively rate generative output that omits the brand-name medication, resulting in false recommendations or treatment options for the provider and the patient. ChatGPT acknowledges these limits. When asked to provide sources for a query on current recurrent urinary tract infection treatment options in pediatric patients, this is the result:
“I apologize for the confusion, but as an AI language model, I don’t have direct access to specific citations or external resources. I can provide general knowledge and information based on my training data. For specific medical advice, I recommend consulting with a qualified healthcare professional or referring to reputable medical literature and resources.”
Even from this brief discussion, it is obvious that understanding the current capabilities and limitations of AI will be of utmost importance to pediatricians promoting child health and well-being now and in the future. As with all new, exciting, and flashy “upgrades” to medicine, we do no favors to ourselves or our patients by sitting on the sidelines. As a collective, pediatricians and health care providers should be working hand in hand with AI and ML companies to promote the ethical, equitable, and meaningful use of this technology for providers and patients alike, making sure these new instruments in the digital doctor’s bag are devices and algorithms that augment the provider’s ability to care for the patient rather than a means to usurp or reprioritize the clinician’s role. And allow me to demonstrate the overarching goal of these advances, the ability to spend more meaningful time with patients away from the computer. Take it away, ChatGPT.
Following is an excerpt from an article derived from the prompt: “Write a 2,500-word article on the future of AI in the field of pediatrics with a focus on pediatric primary care.”
“The future of AI in pediatric primary care holds immense promise for improving child health outcomes. From enhanced diagnostic accuracy to personalized medicine and remote monitoring, AI has the potential to transform the way pediatricians diagnose, treat, and care for their patients. However, it is essential to address the ethical considerations and challenges associated with AI implementation to ensure responsible and equitable use of this technology. By harnessing the power of AI while preserving the human touch, pediatric primary care can embrace a future where AI-driven innovations complement and enhance the care provided to the most vulnerable members of society—the children.”
1. Shah N, Arshad A, Mazer MB, et al. The use of machine learning and artificial intelligence within pediatric critical care. Pediatr Res. 2023;93(2):405-412. doi:10.1038/s41390-022-02380-6
2. Choudhury A, Urena E. Artificial intelligence in NICU and PICU: a need for ecological validity, accountability, and human factors. Healthcare (Basel). 2022;10(5):952. doi:10.3390/healthcare10050952
3. Edgcomb JB, Tseng CH, Pan M, Klomhaus A, Zima BT. Assessing detection of children with suicide-related emergencies: evaluation and development of computable phenotyping approaches. JMIR Ment Health. 2023;10:e47084. doi:10.2196/47084
4. Liu D, Shin WY, Sprecher E, et al. Machine learning approaches to predicting no-shows in pediatric medical appointment. NPJ Digit Med. 2022;5(1):50. doi:10.1038/s41746-022-00594-w