COMMENTARY
Ting Wang, PhD; David W. Price, MD; Andrew W. Bazemore, MD, MPH
Corresponding Author: Ting Wang, PhD; American Board of Family Medicine
Email: twang@theabfm.org
DOI: 10.3122/jabfm.2024.240385R1
Keywords: Artificial Intelligence, Certification, Continuing Education, Diagnostic Errors, Examination Questions, Family Medicine, Formative Feedback, Large Language Models, Physicians, Self Assessment
Dates: Submitted: 10-23-2024; Revised: 01-02-2025; Accepted: 01-21-2025
Status: In production; ahead of print.
Diagnostic errors are a significant challenge in healthcare, often resulting from gaps in physicians' knowledge and misalignment between confidence and diagnostic accuracy. Traditional educational methods have not sufficiently addressed these issues. This commentary explores how large language models (LLMs), a subset of artificial intelligence, can enhance diagnostic education by improving learning transfer and physicians' diagnostic accuracy. The American Board of Family Medicine (ABFM) is integrating LLMs into its Continuous Knowledge Self-Assessment (CKSA) platform to generate high-quality cloned diagnostic questions, implement effective spaced repetition strategies, and provide personalized feedback. By leveraging LLMs for efficient question generation and individualized learning, the initiative aims to transform continuous certification and lifelong learning, ultimately enhancing diagnostic accuracy and patient care.