Dr Schuman, section editor for Peds v2.0, is clinical assistant professor of Pediatrics, Geisel School of Medicine at Dartmouth, Lebanon, New Hampshire, and editorial advisory board member of Contemporary Pediatrics.
Adopting technologies with artificial intelligence (AI) will change patient care in many ways. Here’s where AI has been, where it is now, and what it holds for the future of pediatrics.
For many, the expression “artificial intelligence” (AI) conjures up images of a dystopian future in which humans are ruled by malevolent computers or androids. In our real-world, not-quite-dystopic lives, AI is responsible for driving autonomous vehicles, powering intelligent assistants such as Alexa and Siri, and placing annoying advertisements on the web pages we frequently view. Yet, AI is also improving many aspects of pediatric medicine, and in the not-too-distant future AI will dramatically change the way we practice.
AI, machine learning, and deep learning
As Merriam-Webster simply puts it, artificial intelligence is the “capability of a machine to imitate intelligent human behavior.” It is a generic term, and it is important to understand that computers (machines) can be programmed with a series of “if-then” statements that give the appearance of “intelligence.” A good example would be web- or software-based programs used to prepare taxes. There is nothing natively “intelligent” about these programs, but they accomplish something that a human (eg, an accountant) does routinely.
Machine learning (ML) is a subset of AI in which programs use algorithms to modify themselves in response to input data (Figure 1). Such ML programs can be presented with labeled data and perform “supervised” learning, or can be taught to extract patterns from unlabeled data, which is “unsupervised” learning. Supervised ML can detect faces, identify objects in images, transcribe speech to text, and classify text as spam. Unsupervised ML can compare documents for keywords, detect anomalies in images, and predict changes in health status. Although ML programs are capable of some autonomy, human programmers still need to modify code when errors occur.
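To make the supervised/unsupervised distinction concrete, here is a minimal sketch using invented toy numbers (not clinical data) and the widely used scikit-learn library:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Toy data points: (resting heart rate, temperature elevation in deg C).
# These values are illustrative only, not a real clinical dataset.
points = [[60, 0.1], [65, 0.0], [70, 0.2], [120, 1.5], [130, 1.8], [125, 1.2]]

# Supervised learning: labels (0 = well, 1 = febrile) accompany the data,
# and the algorithm learns the mapping from inputs to labels.
labels = [0, 0, 0, 1, 1, 1]
clf = LogisticRegression().fit(points, labels)
prediction = clf.predict([[128, 1.6]])[0]  # classify a new, unseen patient

# Unsupervised learning: no labels are given; the algorithm discovers
# structure (here, two clusters) in the data on its own.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(points)
```

The supervised model predicts a label for a new case, whereas the unsupervised model simply groups similar cases together without being told what the groups mean.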
Deep learning (DL) is a subset of ML that depends on the development of neural networks. These networks consist of layered sets of algorithms, modeled after the human brain, that recognize patterns within data. Thus, DL systems can modify their algorithms independent of human programming. The layers are made of computational nodes that determine which information should be passed on to subsequent nodes (Figure 2). The more data provided to DL systems, the better they become at doing what they were designed to do. Over the last 10 to 20 years, DL systems have evolved significantly. In the past they beat humans at chess, on the game show “Jeopardy,” and most recently at the Chinese game of Go, which requires orders of magnitude more calculations than chess.
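The layered-node idea can be sketched in a few lines of Python. This is an illustrative toy with random weights, not any clinical system; in a real DL system the weights would be learned from data rather than fixed:

```python
import numpy as np

# A minimal two-layer network: each layer is a set of nodes that weighs its
# inputs and decides how much signal to pass on to the next layer.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # weights connecting 3 inputs to 4 hidden nodes
W2 = rng.normal(size=(2, 4))   # weights connecting 4 hidden nodes to 2 outputs

def forward(x):
    hidden = np.maximum(0, W1 @ x)  # ReLU: a node passes on only positive signal
    return W2 @ hidden              # output layer combines what got through

out = forward(np.array([0.5, -1.0, 2.0]))  # two output values
```

Training consists of nudging the weight matrices so the outputs better match known examples; stacking many such layers is what makes the learning “deep.”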
IBM, creator of Watson, the AI system that communicates with users via human-like speech, has coined the alliterative term “cognitive computing” to encompass AI, ML, and DL. The term was adopted to put a human spin on the use of AI systems, representing IBM’s belief that Watson and its offspring will complement human judgment and experience rather than replace them. Other AI experts suggest replacing the term “artificial intelligence” with “augmented intelligence” to convey a similar message.
AI’s history in medicine and Pediatrics
One of the first AI programs with medical implications was the MYCIN project developed in the 1970s at Stanford University, Stanford, California. It was an expert system that queried a physician regarding patients with severe infections. The program delivered a list of possible causative bacteria and recommended antibiotics with the dosage based on the patient’s body weight. The program performed better than expert physicians but was never used in practice.
A more familiar example of AI currently used in medical practice is voice recognition/dictation software. James and Janet Baker founded Dragon Systems in 1982 to commercialize speech recognition software based on statistical predictive models. Today, Dragon’s current incarnation, from Nuance (Burlington, Massachusetts), boasts a vocabulary of 300,000 words and integrates vocabularies for 90 medical specialties. By integrating DL into the software, the software learns the nuances of one’s speech patterns and improves over time, achieving 99% accuracy.1
My first experience with AI in pediatric technology was in 2010 when I wrote about digital stethoscopes and a program from the former Zargis Medical called “CardioScan” that used DL to analyze recorded heart sounds and identify murmurs that should be investigated with an echocardiogram. This technology was called computer-assisted auscultation (CAA), and CardioScan performed much better than pediatricians in identifying potentially pathologic murmurs. The CAA technology is available today via a program called SensiCardiac (Diacoustic Medical Devices; Stellenbosch, South Africa). It has become popular in low- and middle-income countries where pediatric cardiologists are in short supply.2
Current state and future implications
Much research is ongoing in AI and healthcare, and many studies (Figure 3) have important implications for pediatric care.3 Despite the abundance of research demonstrating how AI can improve healthcare, relatively few products/devices have been granted US Food and Drug Administration (FDA) approval. In April 2019, the FDA proposed a new framework for clearing medical devices that use AI algorithms to assist in diagnosis. According to the FDA, devices that utilize ML algorithms have the potential to adapt and optimize performance over time. To respond to this regulatory challenge, the FDA has proposed adoption of a “predetermined change control plan” in all premarket submissions of medical devices that utilize ML algorithms. This would require that manufacturers provide periodic updates to the FDA as their devices and algorithms change over time.4
Cognitive computing is poised to change healthcare in many ways. It will impact our ability to expedite identification of biomarkers to more effectively treat cancers; facilitate mental health diagnoses; accelerate drug development; promote patient safety; predict how environmental change will affect short-term and long-term health; and much, much more.
The process from the inception of a healthcare DL algorithm to implementation can take years. First, studies must demonstrate an algorithm’s efficacy; once it is refined, it must undergo clinical validation before FDA approval becomes possible. It would take a much longer article to detail the science, math, and programming behind healthcare DL neural networks, but I will describe some recent developments in healthcare AI relevant to Pediatrics that are quite exciting.
In 2017, AliveCor (Mountain View, California) received FDA approval for marketing the KardiaMobile portable device that records single-lead electrocardiograms (EKGs) and detects atrial fibrillation via a Bluetooth connection to a smartphone. One year later, Apple received its own FDA approval and integrated atrial fibrillation detection into the Apple Watch series 4. Although atrial fibrillation is rare in Pediatrics, both devices can be used for recording brief EKGs in children with palpitations and sharing these with their pediatricians or pediatric cardiologists.
In the coming months, AliveCor is releasing an upgraded device, the KardiaMobile 6L, that will perform a 6-lead EKG. In addition, the Zio XT patch (iRhythm Technologies; San Francisco, California) is an EKG monitoring system with a 2-week storage capability. A deep neural network (DNN) trained on a dataset of 91,232 EKG records from 53,549 patients was able to accurately recognize 10 arrhythmia types, including supraventricular tachycardia.5 The Zio XT patch is currently used by pediatric cardiologists when a traditional Holter monitor will not suffice.
Several studies indicate that DL algorithms are capable of reading and interpreting electroencephalograms (EEGs), but such systems have yet to be integrated into routine practice.6
Several companies (Aidoc [Tel Aviv, Israel], Neural Analytics [Los Angeles, California], MaxQ-AI [Andover, Massachusetts], Viz.ai [San Francisco, California], and Imagen [New York, New York]) have received FDA approval for marketing adjuncts to radiology software, commonly referred to as “picture archiving and communication systems” (PACS). These expedite diagnoses of intracranial bleeding, multiple sclerosis, traumatic brain injury, pulmonary embolism, and wrist fractures. There are even ultrasound systems now available that help emergency medical technicians expedite stroke diagnosis in elderly patients in the field. A study published just a few months ago showed that a DL algorithm could perform bone-age assessments better than radiologists.7
Artificial intelligence systems can be trained to recognize skin cancer, and a large study published last year demonstrated that AI can identify malignant melanoma as well as a panel of 58 dermatologists.8 As of yet there are no FDA-cleared products for skin cancer detection, but BlueScan Labs (San Francisco, California; www.bluescanlabs.com) is inviting clinicians to share images of suspect lesions with the company, with the intention of building a large-enough dataset to develop an accurate skin cancer detection system.
The FDA cleared the autonomous IDx-DR device (IDx Technologies; Coralville, Iowa) in 2018, which facilitates detection of retinopathy in diabetic patients aged 22 years and older, without an ophthalmologic examination. The system is intended for use in optometry and primary care offices. In addition, there are now telemedicine photo systems that enable detection of retinopathy of prematurity (ROP) in premature infants who are cared for in remote neonatal intensive care units that may not have access to pediatric ophthalmologists. A study published last year demonstrated that DL algorithms could be used to accurately screen for ROP via telemedicine.9
DeepGestalt is a community-driven facial phenotyping platform trained on a dataset of more than 17,000 images representing 200 syndromes. It has been shown to achieve 91% accuracy at including the correct syndrome among its top 10 suggestions.10 Pediatricians can subscribe to the Face2Gene project (Boston, Massachusetts; www.face2gene.com) and utilize its smartphone application to identify a patient’s syndrome while enlarging the project’s dataset.
Clinical decision support
Earlier this year, American pediatricians collaborated with pediatricians in China to extract information from the Guangzhou Women and Children’s Medical Center electronic health record (EHR) system to develop a “clinical decision support system” (CDSS) tool. In total, 101.6 million data points from 1,362,559 EHRs were extracted from free-text EHR notes using natural language processing algorithms. When tested, the study’s CDSS tool bested “junior pediatricians” but not “senior pediatricians” in diagnosing cases of asthma, encephalitis, gastroenteritis, pneumonia, sinusitis, upper-respiratory infections, and psychiatric diseases. Overall, the system’s diagnostic accuracy reached 90% for some conditions and was no worse than 79% for any of them.11
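The idea of turning free-text notes into structured data points can be illustrated with a deliberately naive sketch. The note text, finding names, and patterns below are invented; real systems such as the one in the Guangzhou study use trained NLP models, not hand-written rules:

```python
import re

# Hypothetical findings to look for in a note, each with a crude keyword pattern.
FINDINGS = {
    "fever": re.compile(r"\bfebrile\b|\bfever\b", re.I),
    "cough": re.compile(r"\bcough(ing)?\b", re.I),
    "wheeze": re.compile(r"\bwheez(e|ing)\b", re.I),
}

def extract(note: str) -> dict:
    """Return which findings are mentioned anywhere in an EHR note."""
    return {name: bool(pattern.search(note)) for name, pattern in FINDINGS.items()}

note = "3-year-old, febrile overnight, persistent cough, no wheezing noted."
features = extract(note)  # flags fever, cough, and wheeze as present
```

Note the flaw: the sketch flags “wheeze” even though the note says “no wheezing noted.” Handling negation, abbreviations, and context like this is precisely why production-grade extraction relies on trained NLP models rather than keyword matching.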
We do not yet have a CDSS released for general pediatric use, but it is just a matter of time before CDSS tools are integrated into our EHRs. Such tools could guide physicians to the most likely diagnosis, order the most cost-effective tests, and prescribe the least-expensive antibiotics, all while reducing medical errors.
Beware the ‘black box’
According to Wikipedia, a “black box” is a system that “can be viewed in terms of its inputs and outputs, without any knowledge of its internal workings.” By this definition, most AI applications in healthcare function as “black box” systems. That is, their implementation is “opaque” to those utilizing the system. Many physicians believe, as I do, that medicine is more art than science and cannot be reduced to “cookbook” algorithms. Because DL systems improve with use, the dilemma we face is explaining to colleagues and patients why DL-derived recommendations are made. There is a growing emphasis on making DL systems more “transparent.”
This discussion of AI’s applications in Pediatrics should convince pediatricians that cognitive computing has the potential for improving pediatric practice. It is unlikely that computers will become “self-aware” and compete with pediatricians for patients. Pediatricians should keep an open mind to adopting AI technologies that will improve care while reducing the hassles associated with pediatric practice.
1. Schuman AJ. Speed EHR documentation with voice recognition software. Contemp Pediatr. 2014;31(6):28-32.
2. Schuman AJ. Electronic stethoscopes: What’s new for auscultation. Contemp Pediatr. 2015;32(2):37-40.
3. Kokol P, Zavrsnik J, Vosner HB. Artificial intelligence and pediatrics: a synthetic mini review. Pediatr Dimensions. 2017;2(4):1-5.
4. US Food and Drug Administration. Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-based Software as a Medical Device (SaMD). Discussion Paper and Request for Feedback. Available at: https://www.fda.gov/media/122535/download. Accessed June 12, 2019.
5. Hannun AY, Rajpurkar P, Haghpanahi M, et al. Cardiologist-level arrhythmia detection and classification in ambulatory electrocardiograms using a deep neural network. Nat Med. 2019;25(1):65-69.
6. Craik A, He Y, Contreras-Vidal JL. Deep learning for electroencephalogram (EEG) classification tasks: a review. J Neural Eng. 2019; 16(3):031001.
7. Tajmir SH, Lee H, Shailam R, et al. Artificial intelligence-assisted interpretation of bone age radiographs improves accuracy and decreases variability. Skeletal Radiol. 2019;48(2):275-283.
8. Haenssle HA, Fink C, Schneiderbauer R, et al; Reader Study Level-I and Level-II Groups. Man against machine: diagnostic performance of a deep learning convolutional neural network for dermoscopic melanoma recognition in comparison to 58 dermatologists. Ann Oncol. 2018;29(8):1836-1842.
9. Wang J, Ju R, Chen Y, et al. Automated retinopathy of prematurity screening using deep neural networks. EBioMedicine. 2018;35:361-368.
10. Gurovich Y, Hanani Y, Bar O, et al. Identifying facial phenotypes of genetic disorders using deep learning. Nat Med. 2019;25(1):60-64.
11. Liang H, Tsui BY, Ni H, et al. Evaluation and accurate diagnoses of pediatric diseases using artificial intelligence. Nat Med. 2019;25(3):433-438.