Artificial intelligence in pediatric healthcare: bridging potential, clinical practice, and ethical considerations


Article information

Clin Exp Pediatr. 2025;68(9):652-655
Publication date (electronic) : 2025 August 28
doi: https://doi.org/10.3345/cep.2025.01844
1Department of Medicine, Kyung Hee University College of Medicine, Seoul, Korea
2Center for Digital Health, Medical Science Research Institute, Kyung Hee University Medical Center, Kyung Hee University College of Medicine, Seoul, Korea
3Department of Pediatrics, Kyung Hee University Medical Center, Kyung Hee University College of Medicine, Seoul, Korea
Corresponding author: Dong Keon Yon, MD, PhD. Department of Pediatrics, Kyung Hee University College of Medicine, 23 Kyungheedae-ro, Dongdaemun-gu, Seoul 02447, Korea Email: yonkkang@gmail.com
*These authors contributed equally to this study as co-first authors.

Received 2025 August 12; Revised 2025 August 21; Accepted 2025 August 21.

Key message

· Artificial intelligence (AI) holds transformative potential for pediatric healthcare, with applications spanning prevention, diagnosis, treatment, and follow-up across diverse subspecialties; however, ethical concerns, scarcity of pediatric-specific data, and limited funding remain significant challenges.

· International consensus on pediatric AI guidelines, expanding child-specific datasets, and incorporating explainable AI are essential to ensure safety and trust.

· Multicenter collaboration and increased investment can address these gaps, enabling equitable, reliable, and pediatric-centered AI solutions.

Introduction

By 2024, the U.S. Food and Drug Administration had authorized 950 medical devices incorporating artificial intelligence (AI) and machine learning (ML) for potential application in clinical practice [1]. AI/ML is increasingly used in medicine for prevention, diagnosis, treatment, and management, reducing physicians' workload while enhancing accuracy and efficiency. Despite these advantages, of the 692 devices analyzed, only 4 (0.6%) were developed exclusively for children; including those approved for both adults and children raises the total to only 69 (10.0%) [1]. The use of AI in pediatrics therefore lags behind its use in adult care, underscoring the need for further development, implementation, and policy consideration.

The review “Artificial intelligence in pediatric healthcare: current applications, potentials and implementation considerations,” published in Clinical and Experimental Pediatrics in June 2025, presents timely and comprehensive insights [2]. It examines AI applications in pediatric healthcare, encompassing large language models (LLMs), AI-based devices, and related technologies, and evaluates their benefits and limitations. The review also analyzes the primary challenges, delineates the roles of key stakeholders, and proposes targeted recommendations to guide future research. This editorial extends the original publication by outlining ethical guidelines for the application of AI in pediatric healthcare and advancing solutions to address the associated challenges.

AI in pediatric healthcare

AI has been introduced into pediatric healthcare, with applications emerging across various pediatric subspecialties. AI-based devices and surgical systems are already making significant contributions to clinical practice. Using data from the Global Burden of Disease Study 2021, we identified the top 15 diseases contributing most to incidence and mortality among individuals under 20 years of age (Fig. 1). Prior studies have examined the applications of AI in pediatric prevention, diagnosis, and treatment. Focusing on leading pediatric diseases, examples include the use of ML to analyze cough sounds for acute respiratory disease detection [3], to differentiate viral from bacterial pneumonia using chest x-rays [4], to generate personalized treatments for infectious diarrhea [5], and to predict early-onset sepsis in neonates [6]. Thus, the integration of AI into pediatric healthcare is steadily expanding and is expected to continue increasing in the future.

Fig. 1.

Global top 15 causes of incidence and mortality among individuals aged under 20 years. COVID-19, coronavirus disease 2019; HIV/AIDS, human immunodeficiency virus/acquired immunodeficiency syndrome.

In this issue, Park et al. [2] provide a comprehensive review of AI applications in pediatric healthcare, including LLMs, AI-based devices, and related technologies, while discussing their respective advantages and limitations. This editorial extends the discussion by focusing on pediatric-specific considerations, including the unique clinical context of children, the potential role of parents and guardians in communication, and the importance of ethical frameworks to guide safe and equitable implementation.

Ethical guidelines for AI in pediatric healthcare

The understanding that children are not merely miniature adults is essential in pediatric healthcare. Pediatric patients require specialized, tailored approaches that account for their unique developmental and physiological characteristics. Given the absence of age-specific best practices in pediatric AI/ML research, pediatric medicine needs appropriate guidelines before pediatric-specific AI solutions are developed. To address this need, Muralidharan et al. [7] proposed the ACCEPT-AI framework in 2023. ACCEPT-AI is founded on 6 core principles: age, communication, consent, equity, data protection, and technological transparency. Each principle is accompanied by key recommendations that enable researchers, regulators, and clinicians to translate these ethical principles into practical measures to mitigate age-related algorithmic bias.

While ACCEPT-AI establishes the ethical prerequisites for introducing AI into pediatric medicine, the Pediatrics EthicAl Recommendations List for AI (PEARL-AI), proposed by Chng et al. [8] in 2025, addresses the unique needs and vulnerabilities of children and focuses on an actionable ethical strategy for clinical practice. This framework is founded on 11 ethical considerations, including nonmaleficence, beneficence, and autonomy, as well as the recommendations of the United Nations Children’s Fund on AI ethics and governance for children. By placing children at the center, it supports ethical decision-making across all phases of the AI lifecycle. Although these frameworks have not been validated through an e-Delphi process, they provide an integrated approach to pediatric AI in healthcare. The complementary application of ACCEPT-AI and PEARL-AI can foster an AI ecosystem that is both ethical and practical in advancing pediatric care. Furthermore, the global community should establish unified guidelines for pediatric AI through international consensus based on these expert recommendations.

Future research roadmaps and policy recommendations

LLMs offer clear benefits in reducing the workload of medical professionals and improving patient communication, though significant risks remain. As noted in the review, their performance in clinical settings is insufficiently validated, and the risks of inappropriate responses and the “black box” problem preclude full trust. The “black box” problem is not unique to pediatric healthcare AI; it is a fundamental limitation of AI models in general, and it poses significant challenges in this field as well. Moreover, pediatric care is complicated by the fact that parents or guardians often communicate on behalf of the young patient, creating uncertainty over whether LLMs can distinguish these contexts and respond accurately [9]. Therefore, future research should employ explainable AI (XAI) to address these challenges. Techniques such as gradient-weighted class activation mapping, local interpretable model-agnostic explanations, and SHapley Additive exPlanations (SHAP) values can mitigate the black box problem by clarifying the reasoning behind AI outputs. By presenting the rationale for its responses, detecting sensitive expressions, and flagging messages for review, XAI can further support medical decision-making and reduce clinicians’ burden.
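To make the Shapley-value idea behind SHAP concrete, the following is a minimal, illustrative Python sketch that computes exact Shapley values for a hypothetical toy risk score (a simple weighted sum standing in for a clinical prediction model; the model, feature values, and baseline are assumptions for illustration, not a real pediatric model). Each feature's value is the weighted average of its marginal contribution across all coalitions of the other features:

```python
from itertools import combinations
from math import factorial

def shapley_values(model, x, baseline):
    """Exact Shapley values by enumerating all feature coalitions.
    Features absent from a coalition are set to their baseline value."""
    n = len(x)
    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for size in range(n):
            for S in combinations(others, size):
                # Coalition weight: |S|! * (n - |S| - 1)! / n!
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi += weight * (model(with_i) - model(without_i))
        phis.append(phi)
    return phis

# Hypothetical toy "risk score": a weighted sum of 3 features
def risk(features):
    weights = [0.5, 0.3, 0.2]
    return sum(w * f for w, f in zip(weights, features))

x = [2.0, 1.0, 4.0]          # a patient's feature values (illustrative)
baseline = [0.0, 0.0, 0.0]   # reference values for "absent" features
phis = shapley_values(risk, x, baseline)

# Efficiency property: the contributions sum to f(x) - f(baseline),
# so the prediction is fully decomposed into per-feature attributions.
print(phis, sum(phis), risk(x) - risk(baseline))
```

This brute-force enumeration is exponential in the number of features; practical SHAP implementations use sampling or model-specific approximations, but the attribution concept is the same: each feature receives the share of the prediction it is responsible for, which is what allows a clinician to see why a model produced a given output.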

From a policy perspective, it is imperative to establish unified guidelines for pediatric AI through international consensus based on expert recommendations. These guidelines should also provide clear standards for stakeholder accountability in adverse outcomes. Even before AI can be developed effectively, pediatric healthcare faces significant challenges: a high risk of bias due to the limited availability of pediatric testing cohorts, and insufficient investment incentives driven by the smaller market size. Of particular concern, among studies of AI models in the intensive care unit, those involving pediatric populations number less than half of those focused on neonates [10]. This underscores the need for multicenter studies and the incorporation of large-scale, multiethnic, and multinational datasets. These efforts will facilitate the development of safe and reliable AI models while enhancing their robustness, thereby enabling the detection of congenital and rare disorders in pediatrics. Furthermore, pediatric healthcare receives significantly less financial support than care for other populations, hindering the development of pediatric-specific AI models and surgical-assistive robotics. Therefore, governments should allocate greater resources to address these disparities.

Conclusion

AI is steadily transforming pediatric healthcare, with growing applications in prevention, diagnosis, treatment, and follow-up care. However, adoption is constrained by ethical concerns, limited pediatric data, and a lack of investment. Although frameworks such as ACCEPT-AI and PEARL-AI offer valuable ethical guidance, establishing consistent pediatric AI guidelines through international consensus is needed. Multicenter studies are required to expand pediatric-specific datasets and improve model reliability. Furthermore, the incorporation of large-scale, multiethnic, and multinational datasets is equally essential to ensure broader applicability. In addition, given the characteristics of pediatric diseases, robust AI models must also be capable of detecting congenital and rare disorders, while the integration of XAI is essential to enhance trust and transparency. Greater funding could support these initiatives, and stakeholder accountability must be clearly defined. Addressing these priorities will enable safe, equitable, and effective integration of AI tailored to the unique needs of children.

Notes

Conflicts of interest

No potential conflict of interest relevant to this article was reported.

Funding

This work was supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (RS-2024-00509257, Global AI Frontier Lab). The funders had no role in study design, data collection, data analysis, data interpretation, or writing of the report.

Author Contribution

Dr. DKY had full access to all of the data in the study and took responsibility for the integrity of the data and the accuracy of the data analysis. All authors approved the final version before submission. Study concept and design: YL, SH, and DKY; Acquisition, analysis, or interpretation of data: YL, SH, and DKY; Drafting of the manuscript: YL, SH, and DKY; Statistical analysis: YL, SH, and DKY; Study supervision: DKY; DKY is the senior author. YL and SH contributed equally as first authors. DKY is the guarantor for this study. The corresponding author attests that all listed authors meet authorship criteria and that no others meeting the criteria have been omitted.

References

1. Muralidharan V, Adewale BA, Huang CJ, Nta MT, Ademiju PO, Pathmarajah P, et al. A scoping review of reporting gaps in FDA-approved AI medical devices. NPJ Digit Med 2024;7:273.
2. Park T, Lee IH, Lee SW, Kong SW. Artificial intelligence in pediatric healthcare: current applications, potentials and implementation considerations. Clin Exp Pediatr 2025 Jun 25. doi: 10.3345/cep.2025.00962. [Epub ahead of print].
3. Sharan RV, Rahimi-Ardabili H. Detecting acute respiratory diseases in the pediatric population using cough sound features and machine learning: a systematic review. Int J Med Inform 2023;176:105093.
4. Rickard D, Kabir MA, Homaira N. Machine learning-based approaches for distinguishing viral and bacterial pneumonia in paediatrics: a scoping review. Comput Methods Programs Biomed 2025;268:108802.
5. Kim SS, Codi A, Platts-Mills JA, Pavlinac PB, Manji K, Sudfeld CR, et al. Personalized azithromycin treatment rules for children with watery diarrhea using machine learning. Nat Commun 2025;16:5968.
6. An AY, Acton E, Idoko OT, Shannon CP, Blimkie TM, Falsafi R, et al. Predictive gene expression signature diagnoses neonatal sepsis before clinical presentation. EBioMedicine 2024;110:105411.
7. Muralidharan V, Burgart A, Daneshjou R, Rose S. Recommendations for the use of pediatric data in artificial intelligence and machine learning ACCEPT-AI. NPJ Digit Med 2023;6:166.
8. Chng SY, Tern MJ, Lee YS, Cheng LT, Kapur J, Eriksson JG, et al. Ethical considerations in AI for child health and recommendations for child-centered medical AI. NPJ Digit Med 2025;8:152.
9. Tse G, Zahedivash A, Anoshiravani A, Carlson J, Haberkorn W, Morse KE. Large language model responses to adolescent patient and proxy messages. JAMA Pediatr 2025;179:93–4.
10. Schouten JS, Kalden M, van Twist E, Reiss IK, Gommers D, van Genderen ME, et al. From bytes to bedside: a systematic review on the use and readiness of artificial intelligence in the neonatal and pediatric intensive care unit. Intensive Care Med 2024;50:1767–77.
