What Complexity Science Predicts About the Potential of Artificial Intelligence/Machine Learning to Improve Primary Care
========================================================================================================================

* Richard A. Young
* Carmel M. Martin
* Joachim P. Sturmberg
* Sally Hall
* Andrew Bazemore
* Ioannis A. Kakadiaris
* Steven Lin

## Abstract

Primary care physicians are likely both excited and apprehensive at the prospects for artificial intelligence (AI) and machine learning (ML). Complexity science may provide insight into which AI/ML applications will most likely affect primary care in the future. AI/ML has successfully diagnosed some diseases from digital images, helped with administrative tasks such as writing notes in the electronic record by converting voice to text, and organized information from multiple sources within a health care system. AI/ML has been less successful at recommending treatments for patients with complicated single diseases such as cancer, and at improving diagnosis, shared decision making, and treatment for patients with multiple comorbidities and social determinant challenges. AI/ML has magnified disparities in health equity, and almost nothing is known of the effect of AI/ML on primary care physician-patient relationships. An intervention in Victoria, Australia showed promise when an AI/ML tool was used only as an adjunct to complex medical decision making. Putting these findings in a complex adaptive system framework, AI/ML tools will likely work when their tasks are limited in scope, the data are clean and mostly linear and deterministic, and the tools fit well into existing workflows. AI/ML has rarely improved comprehensive care, especially in primary care settings, where data have a significant number of errors and inconsistencies.
Primary care should be intimately involved in AI/ML development, and its tools should be carefully tested before implementation; unlike with electronic health records, it should not simply be assumed that AI/ML tools will improve primary care work life, quality, safety, and person-centered clinical decision making.

* Artificial Intelligence
* Clinical Decision-Making
* Complexity Science
* Information Technology
* Machine Learning
* Medical Informatics
* Primary Care Physicians
* Primary Health Care
* Quality Improvement

## Background

Artificial intelligence (AI) and its branch machine learning (ML) have been touted as among the “10 big advances that will improve life, transform computing and maybe even save the planet.”1 (Table 1) Other, less dramatic AI/ML supporters say that it will facilitate new opportunities for doctor-patient connection,2 and that if implemented wisely, AI can free up physicians’ cognitive and emotional space for their patients, even helping them to become better at being human.3 Some primary care clinicians are excited at the possibility of AI/ML and envision many potential uses,3–5 whereas others are more apprehensive, believing that the doctor-patient relationship is founded on communication and empathy,6 and that AI/ML cannot duplicate this.7

[Table 1.](http://www.jabfm.org/content/37/2/332/T1) Table 1. Overview of Artificial Intelligence and Deep Learning

The recent hype over nonhealthcare AI/ML applications demonstrates that the potential benefits and harms of AI/ML are on many peoples’ minds.
Autonomous vehicles have had some successes, but also failures leading to fatal accidents that have prompted regulatory scrutiny.8 The large language model ChatGPT demonstrated success at passing standardized legal exams, but when tasked to write legal briefs, it wrote about nonexistent case law precedents.9 In health care, ChatGPT has successfully answered medical license test questions,10 but it has been found to produce errors in both stating facts and synthesizing data from the medical literature,11 and there is now emerging evidence that the mass use of ChatGPT actually worsens its accuracy and reliability.12

Complexity science may provide some insight into which predictions about the future of AI/ML in primary care are full of reason or full of hype (Table 2). Briefly, complexity arises from the interconnectedness and interdependence of multiple agents in a particular context (eg, a hospital setting vs a clinic vs the community).13 The dynamics between these agents result in feedback loops that alter the nature and behavior of the agents; that is, they adapt to changing circumstances, resulting in emergent behaviors, which can be expected but not predicted.14,15 Complex systems consist of a large number of elements that in themselves can be simple. Even if specific elements only interact with a few others, the effects of these interactions are propagated throughout the system, and these interactions are nonlinear. An everyday example is a shortage of toilet paper “caused” by the COVID-19 pandemic. Other aspects of complex adaptive systems (CAS) are further explained in the Appendix.

[Table 2.](http://www.jabfm.org/content/37/2/332/T2) Table 2. Salient Features of Complex Adaptive Systems

Primary care must manage many interconnected and interdependent issues, and thus by definition is complex.
Unlike specialty care, primary care faces a breadth of undifferentiated patient presentations and irregular timing of presentations to primary care and acute services.16,17 Primary care clinic visits are more complex than specialists’ visits.18 Primary care providers have to navigate the greatest volume of patient care information, spanning all other health care entities: hospitals, nursing homes, lab and imaging facilities, specialists, insurance companies, government agencies, pharmacies, home health agencies, and so on. AI/ML could help or harm the management of this information.

Supporters believe that AI/ML will revolutionize primary care: improving risk prediction and intervention, dispensing medical advice and triage, improving clinic workflow, broadening diagnostic capabilities, assisting in clinical decision making, assisting in clerical work, and aiding population or panel management, including risk assessments and remote patient monitoring.3 However, primary care AI/ML implementation research remains at a very early stage of maturity,19 and as with many technological advances before, there is no guarantee that AI/ML will successfully transform care delivery and/or care outcomes. The purpose of this article is to discuss the opportunities and challenges of AI/ML in primary care, seen through the lens of CAS.

## The Opportunities and Challenges of AI/ML for Primary Care

### Detection/Diagnosis of Single Diseases

Early examples of successful implementation of AI/ML include analyzing data from images to diagnose specific conditions.
Examples include retinal scans to diagnose diabetic retinopathy,20 data from mammogram imaging to identify radiographic images suggestive of breast cancer,21 using wearables to detect atrial fibrillation,22 and using tele-dermatology with AI/ML assistance to improve diagnostic accuracy for skin lesions.23 AI/ML successes have been described less as delivering a certain diagnosis and more as offering a good guess at what the answer might be,21 which helps explain why the impact of AI/ML even in a digital field such as diagnostic radiology has been only modest so far.24 For primary care, AI/ML might augment a physician’s knowledge and confidence to diagnose rare diseases.25 However, even apparent successes of AI/ML for diagnostic outcomes have been found to be nonsensical when deeply explored post hoc. For example, an AI tool for detecting melanomas in photographs of skin lesions did so by recognizing that photographs with cancer were more likely to have small rulers in the image.27 Judging AI/ML accuracy in clinical diagnosis is particularly challenging outside of well-structured case vignettes.28 Although AI/ML-trained systems may aid the diagnostic process, they cannot determine the final diagnosis, which involves human interactions, judgments, and social systems understandings that are beyond what computers can model.29

### Treat Specific Diseases

AI/ML also has the potential to inform decision making by quickly synthesizing a wide variety of information from the medical literature or electronic health record (EHR).
It could also incorporate patient pathways, including hospital discharge summaries, drug databases, and drug-drug and drug-disease interactions, with the ability to analyze large amounts of data and discover correlations that may have been missed by researchers and health care providers, enhancing patient-centered care.30 For example, a hospital bedside AI/ML-based consultation service had only a limited effect, influencing treatment decisions in just 10 out of 100 queries, mostly those involving unusual, understudied patient presentations or rare diseases.31 AI/ML was not successful at improving cancer treatment, a task that may more closely reflect the complexity of primary care. Studies concluded that IBM Watson did not improve on the decisions of oncologists, and the project was abandoned.32,33 Attempts involving diagnoses that require integrating clinical findings, a crucial task in primary care, have not achieved the same success as single disease efforts.34 AI/ML models based purely on historic data would only learn the prescribing habits of physicians in retrospect, which may not represent an ideal state as practice evolves.35 For example, computerized decision support systems (CDSS) designed with a high tolerance for risk favored algorithm performance over patient safety, potentially exposing patients to inappropriate medications.36 Determining whether AI/ML improves patient outcomes remains the most important test, and currently there is scant evidence of downstream benefit.
AI/ML optimists consider real world data, such as pharmaceutical postmarketing surveillance, a valid source of evidence to connect treatments with outcomes.37 Skeptics believe AI/ML is no substitute for more rigorous methodologies, which may include randomized controlled trials or learning health system approaches.38

### Predict Future Health Events

AI/ML may provide new opportunities to construct more accurate predictions of disease risk, which could inform smarter decision making algorithms.39 Oak Street Health implemented an ML approach that increased the number of patients identified at high risk for hospitalization compared with retrospective models, but it did not improve mortality.40 ML may be particularly useful when dealing with “wide data,” where the number of input variables is greater than the number of subjects. Other observers believe most studies on ML-based prediction models show poor methodological quality and are at high risk for bias.41 Existing prediction models that use large data sets and AI/ML can give similar population level estimates of cardiovascular disease, while giving vastly different answers for individual patients.42 An explanation for this discrepancy is that ML focuses on the strength of the correlation between variables rather than the direction of causality,26 and ML may add little value to predictions of future events compared with traditional methods. And even if one assumes that AI/ML applications might increase the predictive accuracy for future events, it does not mean that there is an action available to decrease the risk, which in turn raises important ethical concerns.
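The population-versus-individual discrepancy described above can be made concrete with a toy sketch. Everything here is hypothetical: the two risk scores, their coefficients, and the synthetic cohort are illustrative inventions, not any published cardiovascular model. Two scores that weight predictors differently can report nearly identical average risk across a panel while disagreeing substantially for particular patients.

```python
# Toy illustration (hypothetical risk scores, synthetic cohort): two models
# can agree at the population level yet disagree sharply for individuals.

def model_a(age, sbp):
    """Hypothetical risk score weighted toward age."""
    return min(1.0, 0.004 * age + 0.0005 * sbp)

def model_b(age, sbp):
    """Hypothetical risk score weighted toward systolic blood pressure."""
    return min(1.0, 0.001 * age + 0.0017 * sbp)

# Synthetic panel of (age, systolic blood pressure) pairs
cohort = [(45, 120), (70, 110), (55, 160), (80, 150),
          (50, 130), (85, 100), (40, 180)]

mean_a = sum(model_a(a, s) for a, s in cohort) / len(cohort)
mean_b = sum(model_b(a, s) for a, s in cohort) / len(cohort)
max_gap = max(abs(model_a(a, s) - model_b(a, s)) for a, s in cohort)

print(f"population mean risk: A={mean_a:.3f}, B={mean_b:.3f}")  # nearly equal
print(f"largest individual disagreement: {max_gap:.3f}")        # substantial
```

In this sketch the two panel-level averages differ by about 0.02, while an older patient with low blood pressure receives individual estimates roughly 0.14 apart, which is the kind of gap that could flip a treatment decision even though either model would look acceptable in aggregate validation.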
### Decrease Administrative Burden

AI/ML using voice recognition has been implemented to listen to a physician-patient encounter and document a preliminary note, or autocharting.43 A tool called Suki was found in a demonstration project to reduce documentation time by 72%,44 critical in an age where such administrative burden has been clearly linked to growing rates of reported US clinician burnout.45 An assessment of the performance of ChatGPT in generating histories of present illness in medical notes found that it sometimes reported information that was not present in the source dialog, an error called “hallucinating.”46 Other possible uses include optimizing coding for value-based payments and automating aspects of previsit planning.3 However, a substantial proportion of patient symptoms in primary care are vague, such that even human scribes present in the room do not agree on how to document them.43 A possible distinction between these disparate findings may be recording and categorizing information (converting spoken history and physical examination elements to a dictated note) versus improving understanding. Using AI/ML to automate prior authorizations has also been proposed.47

### Expand Primary Care Capacity

Some AI/ML innovations aim to extend the work of the primary care team beyond the office visit, the home visit, and even beyond telehealth.
Conversational agents using AI/ML improved depression scores in a small 2-week trial, a time frame that is likely clinically irrelevant.48 A recent randomized comparative effectiveness trial indicated that AI/ML could allow many more patients to be served effectively by CBT-CP programs using the same number of therapists.49 Other examples of AI/ML used for single-issue primary care tasks include mental health assessments during telehealth calls,50 mental health support,51 and weight management.52 AI/ML may help with remote monitoring, for example, alerting patients and doctors when a continuous glucose monitor stops functioning, or decreasing false alarms in telemetry units.53 However, algorithms that are applied repeatedly to track a patient’s condition will likely trigger repeated false alarms or information the clinician is already aware of, which contributes to alarm fatigue.54

## Data Issues—Signal, Noise, and Action

### Data Accuracy

Some of the successes of AI/ML outside of health care have been noteworthy, for example, the progression of AI to defeat humans in the games of checkers, chess, then Go.55 These programs were trained on tens of millions of previous games, where the input data were essentially perfect.
In contrast, “noisy” data decrease classification accuracy and prediction performance in ML.56 In fact, the information that should be classified as signal versus noise is difficult to determine even for highly focused questions in medicine, for example, determining whether heart rate variability derives more from normal physiologic events (stress), normal “abnormal” states (sinus bradycardia in a young athlete), or disease (paroxysmal tachycardias).57 Noisy data are a challenge in all potential uses of AI/ML, but will likely be an especially significant barrier to the utility of ML in primary care, where inaccurate data are already abundant, which erodes the accuracy of ML predictive models.58 In a detailed investigation of EHR data fidelity, diagnostic codes for hypoglycemia were found to have moderate positive predictive value (69.2%) and moderate sensitivity (83.9%).59 In another study, accuracy rates of medical registry data ranged from 67% to 100% and completeness ranged from 30.7% to 100%.60 Important drivers of EHR inaccuracies are copied and pasted notes, a work pattern that is not likely to change going forward.61 In addition, the timeliness and accessibility of EHR data are challenging. Raw data from an EHR must be cleaned and formatted before use, but if this process is delayed, the models cannot be applied in real time for patient care.62 Whether ML approaches can sort out which data inconsistencies are errors, and which add useful information, remains unknown.

### Actionable Data

AI/ML has been shown to predict risks of future events in some cases.
But merely providing the probability of a particular outcome, such as readmission or mortality risk, is unlikely to change physician or patient behavior in most primary care settings, as physicians incorporate their patient’s unique context and preferences into their decision making.63 Poor calibration can often be expected when a model is applied from one population to another, which can lead to harmful decisions.63,64 Although modern imputation methods can mitigate some bias due to missing information, these methods are less useful in EHR settings, where it is not possible to distinguish the true absence of a relevant characteristic (such as a particular comorbidity) from data incompleteness. Even if efforts are undertaken to maximize calibration, patterns seen in existing data sets should be considered as no more than hypothesis generating, and will still require classic hypothesis testing.65 A summary of the potential uses and features of AI/ML applications about which primary care physicians could be excited or apprehensive is provided in Table 3.

[Table 3.](http://www.jabfm.org/content/37/2/332/T3) Table 3. Excited or Apprehensive—Likely Uses of Artificial Intelligence/Machine Learning from a Complexity Perspective

### New Approaches May Emerge Using AI/ML in Complex Adaptive Systems

Complexity principles have been used to improve health system outcomes with AI/ML at the macro-level. Using hospital admissions and emergency data, Monash Health in Victoria, Australia developed an algorithm to predict which patients were at high risk of readmission within the next 12 months. The algorithm had a positive predictive value of 33%.66,67 Monash Health leveraged the primary care teams’ preexisting relationships with patients and initiated an outreach team of medical, allied health, and community health workers (CHWs) to augment the care provided by primary care physicians, using an online system to predict deteriorating patient journeys.
Nonclinicians made regular monitoring phone calls (≥ weekly) prompted by a clinical algorithm that continually predicted unmet needs and risk of deterioration or admission based on the most recent phone calls. The intense outreach effort using conversations aimed to address unintended repeat investigations, loneliness, hospital infection, and posthospital syndrome, aligned with the goals of patients.68 Interventions the clinical teams developed were not defined a priori, but emerged through dynamic feedback loops between the interacting agents. They reduced readmissions by 1.1 bed-days per person per month, with a > 60% participation rate among eligible patients.68 The physicians, wider teams, and CHWs could continually update their interventions, which is a feature of a well-functioning CAS. Clinical teams learned from their ongoing interactions with patients and adjusted their recommendations based on each patient’s personal journey. The AI/ML software looked at individual call data, records of all the calls and outcomes, and all patients in the database. It improved CHWs’ prediction of an event, that is, the likelihood that something would happen before the next weekly call, by 80%.69 In this example, the primary care teams used AI/ML to manage a large volume of data that would be difficult and expensive for humans to process, but relied on existing relationships and clinician judgment to determine the best actions. The researchers concluded that the algorithm determined only 10% to 20% of the success; the primary care team workflow determined 80% to 90%.

## Other Concerns with AI/ML

### Health Equity

The US Food and Drug Administration and others recognize that AI/ML has inclusiveness problems and may exacerbate outcome differences in vulnerable populations.
There are many types of biases in machine learning.70 Many data sets lack diversity and completeness of data by gender, race, ethnicity, geographic location, and socioeconomic status.71,72 For example, AI/ML was able to identify cancerous moles in White patients, but not in Black patients.73 In contrast, AI/ML has been shown to counteract implicit racial bias in the prevention of deep vein thrombosis.74 Human health care workers certainly have biases too. Data sets that reflect historic disparities in care related to racism and privilege have been shown to produce AI/ML results that retain these biases and thereby perpetuate the structural disadvantages and disparities.75 Users of AI/ML may not even recognize the biases. Clinicians with a propensity to trust suggestions from AI/ML support systems may discount other relevant information, leading to so-called automation complacency.76 To combat this, fairness audits were used to reflect on AI/ML performance in prompts for end-of-life care planning, and found application performance differences by race/ethnicity, for example, in Hispanic/Latino males whose race was recorded as “other.”77 This particular audit required 115 person-hours and did not add clinically meaningful information due to poor demographic data quality and lack of data access.
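The core computation behind a fairness audit of the kind described above can be sketched briefly. This is a minimal illustration with a synthetic handful of records and generic subgroup labels (`group_a`, `group_b` are invented placeholders, not the categories from the cited audit): it stratifies a model's flags by subgroup and compares sensitivity and positive predictive value, the same metrics discussed for EHR data fidelity earlier.

```python
# Minimal fairness-audit sketch: per-subgroup sensitivity and PPV
# computed from synthetic (subgroup, true_outcome, model_flagged) records.
from collections import defaultdict

records = [
    ("group_a", 1, 1), ("group_a", 1, 1), ("group_a", 1, 0), ("group_a", 0, 0),
    ("group_a", 0, 1), ("group_b", 1, 1), ("group_b", 1, 0), ("group_b", 1, 0),
    ("group_b", 0, 0), ("group_b", 0, 0),
]

counts = defaultdict(lambda: {"tp": 0, "fp": 0, "fn": 0})
for group, truth, flagged in records:
    if truth and flagged:
        counts[group]["tp"] += 1          # true positive
    elif not truth and flagged:
        counts[group]["fp"] += 1          # false positive
    elif truth and not flagged:
        counts[group]["fn"] += 1          # false negative

audit = {}
for group, c in counts.items():
    sensitivity = c["tp"] / (c["tp"] + c["fn"])   # share of true cases flagged
    ppv = c["tp"] / (c["tp"] + c["fp"])           # share of flags that are true
    audit[group] = (round(sensitivity, 2), round(ppv, 2))

print(audit)
```

In this toy data, the model flags two-thirds of true cases in one subgroup but only one-third in the other; an audit that surfaces such a gap is straightforward arithmetically, while, as the Stanford experience shows, the expensive part is assembling trustworthy demographic labels and outcome data in the first place.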
AI/ML was also largely unsuccessful at incorporating social determinants of health indicators into prospective risk adjustment for private insurance payments in the US, improving the predictive ratio by only 3%,78 and this performance may worsen over time as so-called latent biases emerge with subsequent use of an AI/ML tool.79 It is beyond the scope of this commentary to review other concerns about AI/ML such as privacy, data ownership and transferability, intellectual property, cybersecurity, and medical liability for the creators and owners of AI/ML tools.80 AI/ML was recently challenged to “not just replicate human thinking processes, but should aim to exceed them,” a bar that is likely insurmountable for many aspects of complex primary care.65

## Discussion

Despite the emergence of intriguing AI/ML tools such as ChatGPT, successful transformation of primary care using AI/ML is far from guaranteed. Primary care should play a critical role in developing, introducing, implementing, and monitoring AI/ML tools, especially regarding common symptoms, acute diseases, chronic diseases, and preventive services.81 To avoid repeating the mistakes of the forced EHR implementation onto primary care without adequate vetting, US policy makers should assume that AI/ML products will only improve primary care if its stakeholders are heavily involved in their development, piloting, vetting, and wider implementation. No tool can account for the inherent complexity of primary care. It is not the existence of AI/ML tools that is the potential problem, but the way the tools are used: are they ultimately able to integrate the tacit domains required for participatory, effective, and ethical decision making? Do the potential cognitive and data-management improvements of AI/ML add any value to patient outcomes beyond the pre-existing deep relationships between primary care clinicians and their patients?
Complexity science recognizes that primary care decision making emerges not only from doctor-patient relationships and knowledge of confounding factors for treating an individual patient, such as comorbidities, social determinant challenges, and unique patient attitudes and beliefs, but also from the interrelated hierarchical layers and feedback loops of health care systems.13 AI/ML approaches may harm primary care by acting to minimize an understanding of these complexities, often by limiting the number of features used to develop the algorithms.57 In fact, many potential AI/ML tools are described as synthesizing information (EHR data and billing data) from the front lines of the health care system hierarchy and sending conclusions to the macro administrative layers (analysts and administrators), which is the opposite of the information flow in a natural and sustainable CAS.82 The series of interventions in Victoria, Australia demonstrates a CAS-consistent flow of information. AI/ML was used to collate a large amount of data to identify patients who were potentially about to “tip” into worse health states, but the synthesized information was sent not to the top levels of the system hierarchy but rather to the front-line clinicians. In addition, AI/ML was not used to make the medical decisions about how to respond to the patients flagged as being at increased risk for hospitalization. What made this AI/ML application successful was not only the model itself (which actually played a relatively small role), but more importantly the way the model augmented existing relationships and human-driven processes of care. A key limitation of AI/ML lies in the fact that its predictions arise from existing data. It cannot, at least at this point in time, synthesize the multiple perspectives of a health professional in the context of the patient in front of them.
Creating new data upfront for unmet clinical needs and specific purposes is expensive in time and resources, but gives the best chance of being useful in practice. A deliberative patient-physician relationship is important for healing, particularly for complex conditions and when there is a high risk of adverse effects, because individual patients’ preferences differ.83 There are no algorithms for such situations, which change depending on emotions, nonverbal communication, values, personal preferences, prevailing social circumstances, and many other factors. For example, AI/ML will not likely reduce the uncertainty inherent in making ethical decisions about care at the end of life. AI/ML skeptics point out that algorithms and prediction instruments, ironically, exercise tyranny over the true freedom of moral agency that we claim to be respecting in our patients.84 Policy makers (and investors) should not simply assume that AI/ML tools can significantly improve the complex person-centered work of primary care physicians and their teams. Useful applications of AI/ML in primary care will undoubtedly emerge. Complexity science suggests that these tools are much more likely to assist primary care with discrete functions with highly focused outcomes, but it is very unlikely that AI/ML tools will replace complex relationship-centered decision making by physicians and their teams (though the team composition may evolve if administrative burden can truly be reduced).

## Acknowledgments

The authors acknowledge Jacqueline Kueper, PhD, Ginetta Salvaggio, MD, and C. J. Peek, PhD, for their participation in the original NAPCRG Forum and comments on this subject.

## Appendix.

### Complex Adaptive Systems Further Explained

Complex systems contain many direct and indirect feedback loops. Complex systems are open systems—they exchange energy or information with their environment (their context)—and operate dynamically.
Any complex system thus has a history, and that history is of cardinal importance: the behavior of the system is influenced by its previous path. Because the interactions are rich, dynamic, fed back, and, above all, nonlinear, the behavior of the system as a whole cannot be predicted from an inspection of its components. The notion of “emergence” is used to describe this aspect. The presence of emergent properties does not provide an argument against causality, only against deterministic forms of prediction. Complex systems are adaptive. They can (re)organize their internal structure without the intervention of an external agent.1

#### Practice Domains

Knowledge itself is complex and thus not all knowledge contributes equally to what we know.2 The Cynefin framework (Figure 1a) is 1 approach to understanding decision making in complex systems, and it helps visualize the medical knowledge domains (Figure 1b) that facilitate clinical decision making in a primary care context.3 In the obvious quadrant, direct cause and effect relies on explicit knowledge trials such as randomized controlled trials (RCTs) and meta-analyses of RCTs. This is the domain most conducive to monitoring by simple single disease guidelines. Perhaps the recent success of large language models such as ChatGPT in answering medical license test questions fits in this domain.4 On the other hand, ChatGPT has been found to produce errors in stating facts and synthesizing data from the medical literature.5

[Figure A1.](http://www.jabfm.org/content/37/2/332/F1) Figure A1. **The Cynefin domains and its application to knowledge in medicine.**6

In the complicated quadrant, cause and effect are discernable through multilayered interacting parts that also rely on explicit knowledge trials.
In these clinical scenarios, complicating factors such as comorbidities may influence patient care recommendations, but the relationships of inputs and outputs are linear and follow parametric patterns. An example is balancing the negative impacts of comorbidities in deciding whether a major surgical procedure is more likely to benefit or harm a patient. In the complex quadrant, there are so many interacting parts, whose relationships are perceivable but not fully predictable in real time, that it is difficult to predict the behaviors and outcomes based on knowledge of the component parts. Multiple layers of system hierarchies interact through nonlinear feedback loops with nonlinear relationships between inputs and outputs, making outcomes unpredictable and sometimes surprising. Relationships between inputs and outputs often follow log-linear or Pareto distributions. An example is caring for a dying patient and her family, balancing all the often competing organic medical, psychological, legal, and familial needs. It is often tacit knowledge rather than explicit data that directs care for these complex needs. In the chaotic quadrant, the various components have no apparent relationship to each other, leading to a crisis with an emergent new order and/or a breakdown of the existing order, for example, the early days of the COVID-19 pandemic. The ability of AI/ML to add value to care delivery likely diminishes as 1 moves from the simple to the chaotic domains. A further examination of complex system understandings and the role of AI/ML is shown in Table A1. Most existing successes in AI/ML represent small and constrained components of the complex health care system, such as measuring pixels on an image to help make a diagnosis, or using voice recognition to monitor and treat a single mental health concern. The data used are relatively confined and have linear deterministic relationships with outcomes (a direct predictable link from input to output).
AI/ML has generally failed when more complexities are considered across different informational silos, agents, and hierarchies; and when the relationship between data input and desired outcomes are nonlinear with power law distributions of inputs and outputs, include feedback loops, and are nondeterministic. View this table: [Table A1.](http://www.jabfm.org/content/37/2/332/T4) Table A1. AI/ML Features and Their Relationship to Complex Adaptive Systems and Primary Care ## Notes * This article was externally peer reviewed. * *Conflict of interest:* Dr. Young discloses that he is the sole owner of SENTIRE, LLC, which is a novel primary care documentation, coding, and billing system. Dr. Lin is a principal investigator working with companies and non-profit organizations through grants and sponsored research agreements administered by Stanford University. Current and previous collaborators include Amazon, American Academy of Family Physicians, American Board of Family Medicine, Center for Professionalism and Value in Health Care, Codex Health, DeepScribe, Google Health, Omada Health, Predicta Med, Quadrant Technologies, Soap Health, Society of Teachers of Family Medicine, University of California San Francisco, and Verily. With the sole exception of Codex Health, where he serves as VP of Health Sciences as a paid consultant, neither he nor any members of his immediate family have any financial interest in these organizations. Dr. Lin is the James C. Puffer/American Board of Family Medicine Fellow at the National Academy of Medicine. The opinions expressed are solely his own and do not represent the views or opinions of the National Academies. The other authors declare no conflicts. * *Funding:* None. * To see this article online, please go to: [http://jabfm.org/content/37/2/332.full](http://jabfm.org/content/37/2/332.full). * Received for publication June 6, 2023. * Revision received August 8, 2023. * Accepted for publication August 10, 2023. ## References 1. 
1. Scientific American. World Changing Ideas 2015. *Scientific American*. 2015.
2. Liaw W, Kakadiaris IA. Primary care artificial intelligence: a branch hiding in plain sight. Ann Fam Med 2020;18:194–5.
3. Lin SY, Mahoney MR, Sinsky CA. Ten ways artificial intelligence will transform primary care. J Gen Intern Med 2019;34:1626–30.
4. Kueper JK, Terry A, Bahniwal R, et al. Connecting artificial intelligence and primary care challenges: findings from a multi-stakeholder collaborative consultation. BMJ Health Care Inform 2022;29:e100493.
5. Lin S. A clinician's guide to artificial intelligence (AI): why and how primary care should lead the health care AI revolution. J Am Board Fam Med 2022;35:175–84.
6. McWhinney IR. 'An acquaintance with particulars…' Fam Med 1989;21:296–8.
7. Blease C, Kaptchuk TJ, Bernstein MH, Mandl KD, Halamka JD, DesRoches CM. Artificial intelligence and the future of primary care: exploratory qualitative study of UK general practitioners' views. J Med Internet Res 2019;21:e12802.
8. Boudette N, Metz C, Ewing J. Tesla autopilot and other driver-assist systems linked to hundreds of crashes. New York Times. June 15, 2022. Available at: [https://www.nytimes.com/2022/06/15/business/self-driving-car-nhtsa-crash-data.html](https://www.nytimes.com/2022/06/15/business/self-driving-car-nhtsa-crash-data.html).
9. Milmo D. Two US lawyers fined for submitting fake court citations from ChatGPT. The Guardian. June 23, 2023. Accessed Aug 2, 2023. Available at: [https://www.theguardian.com/technology/2023/jun/23/two-us-lawyers-fined-submitting-fake-court-citations-chatgpt](https://www.theguardian.com/technology/2023/jun/23/two-us-lawyers-fined-submitting-fake-court-citations-chatgpt).
10. Kung TH, Cheatham M, Medenilla A, et al. Performance of ChatGPT on USMLE: potential for AI-assisted medical education using large language models. PLOS Digit Health 2023;2:e0000198.
11. van Dis EAM, Bollen J, Zuidema W, van Rooij R, Bockting CL. ChatGPT: five priorities for research. Nature 2023;614:224–6.
12. Chen L, Zaharia M, Zou J. How is ChatGPT's behavior changing over time? 2023. arXiv:2307.09009.
13. Martin CM, Sturmberg JP. General practice–chaos, complexity and innovation. Med J Aust 2005;183:106–9.
14. Burton C, Elliott A, Cochran A, Love T. Do healthcare services behave as complex systems? Analysis of patterns of attendance and implications for service delivery. BMC Med 2018;16:138.
15. Martin CM. Self-rated health: patterns in the journeys of patients with multi-morbidity and frailty. J Eval Clin Pract 2014;20:1010–6.
16. Surate Solaligue DE, Hederman L, Martin CM. What weekday? How acute? An analysis of reported planned and unplanned GP visits by older multi-morbid patients in the Patient Journey Record System database. J Eval Clin Pract 2014;20:522–6.
17. Burton C, Stone T, Oliver P, Dickson JM, Lewis J, Mason SM. Frequent attendance at the emergency department shows typical features of complex systems: analysis of multicentre linked data. Emerg Med J 2021;39:3–9.
18. Katerndahl D, Wood R, Jaen CR. Complexity of ambulatory care across disciplines. Healthcare 2015;3:89–96.
19. Kueper JK, Terry AL, Zwarenstein M, Lizotte DJ. Artificial intelligence and primary care research: a scoping review. Ann Fam Med 2020;18:250–8.
20. Schmidt-Erfurth U, Sadeghipour A, Gerendas BS, Waldstein SM, Bogunovic H. Artificial intelligence in retina. Prog Retin Eye Res 2018;67:1–29.
21. Becker AS, Marcon M, Ghafoor S, Wurnig MC, Frauenfelder T, Boss A. Deep learning in mammography: diagnostic accuracy of a multipurpose image analysis software in the detection of breast cancer. Invest Radiol 2017;52:434–40.
22. Perez MV, Mahaffey KW, Hedlin H, et al; Apple Heart Study Investigators. Large-scale assessment of a smartwatch to identify atrial fibrillation. N Engl J Med 2019;381:1909–17.
23. Jain A, Way D, Gupta V, et al. Development and assessment of an artificial intelligence-based tool for skin condition diagnosis by primary care physicians and nurse practitioners in teledermatology practices. JAMA Netw Open 2021;4:e217249.
24. Rajpurkar P, Lungren MP. The current and future state of AI interpretation of medical images. N Engl J Med 2023;388:1981–90.
25. Adler-Milstein J, Chen JH, Dhaliwal G. Next-generation artificial intelligence for diagnosis: from predicting diagnostic labels to "wayfinding." JAMA 2021;326:2467–8.
26. Rowe M. An introduction to machine learning for clinicians. Acad Med 2019;94:1433–6.
27. Narla A, Kuprel B, Sarin K, Novoa R, Ko J. Automated classification of skin lesions: from pixels to practice. J Invest Dermatol 2018;138:2108–10.
28. Kulkarni PA, Singh H. Artificial intelligence in clinical diagnosis: opportunities, challenges, and hype. JAMA 2023;330:317–8.
29. Chen JH, Dhaliwal G, Yang D. Decoding artificial intelligence to achieve diagnostic excellence: learning from experts, examples, and experience. JAMA 2022;328:709–10.
30. Jing B, Boscardin WJ, Deardorff WJ, et al. Comparing machine learning to regression methods for mortality prediction using Veterans Affairs electronic health record clinical data. Med Care 2022;60:470–9.
31. Callahan A, Gombar S, Cahan EM, et al. Using aggregate patient data at the bedside via an on-demand consultation service. N Engl J Med Catalyst 2021;2.
32. Schmidt C. M. D. Anderson breaks with IBM Watson, raising questions about artificial intelligence in oncology. J Natl Cancer Inst 2017;109.
33. Strickland E. IBM Watson, heal thyself: how IBM overpromised and underdelivered on AI health care. IEEE Spectr 2019;56:24–31.
34. Detsky AS. Learning the art and science of diagnosis. JAMA 2022;327:1759–60.
35. Rajkomar A, Dean J, Kohane I. Machine learning in medicine. N Engl J Med 2019;380:1347–58.
36. Wachter RM, Howell MD. Resolving the productivity paradox of health information technology: a time for optimism. JAMA 2018;320:25–6.
37. Davies N. The FDA and artificial intelligence. 07-06-2021. Accessed Feb 13, 2023. Available at: [https://www.thepharmaletter.com/article/the-fda-and-artificial-intelligence](https://www.thepharmaletter.com/article/the-fda-and-artificial-intelligence).
38. Beam AL, Manrai AK, Ghassemi M. Challenges to the reproducibility of machine learning models in health care. JAMA 2020;323:305–6.
39. Agha L, Skinner J, Chan D. Improving efficiency in medical diagnosis. JAMA 2022;327:2189–90.
40. Bhatt S, Cohon A, Rose J, et al. Interpretable machine learning models for clinical decision-making in a high-need, value-based primary care setting. N Engl J Med Catalyst 2021;2.
41. Andaur Navarro CL, Damen JAA, Takada T, et al. Risk of bias in studies on prediction models developed using supervised machine learning techniques: systematic review. BMJ 2021;375:n2281.
42. Li Y, Sperrin M, Ashcroft DM, van Staa TP. Consistency of variety of machine learning and statistical models in predicting clinical risks of individual patients: longitudinal cohort study using cardiovascular disease as exemplar. BMJ 2020;371:m3919.
43. Rajkomar A, Kannan A, Chen K, et al. Automatically charting symptoms from patient-physician conversations using machine learning. JAMA Intern Med 2019;179:836–8.
44. Marso A, Waldren SE. Five administrative tasks technology could make easier for physicians. Fam Pract Manag 2022;29:5–8.
45. Rao SK, Kimball AB, Lehrhoff SR, et al. The impact of administrative burden on academic physicians: results of a hospital-wide physician survey. Acad Med 2017;92:237–43.
46. Nayak A, Alkaitis MS, Nayak K, Nikolov M, Weinfurt KP, Schulman K. Comparison of history of present illness summaries generated by a chatbot and senior internal medicine residents. JAMA Intern Med 2023;183:1026–7.
47. Sahni NR, Carrus B. Artificial intelligence in U.S. health care delivery. N Engl J Med 2023;389:348–58.
48. Fitzpatrick KK, Darcy A, Vierhile M. Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): a randomized controlled trial. JMIR Ment Health 2017;4:e19.
49. Piette JD, Newman S, Krein SL, et al. Patient-centered pain care using artificial intelligence and mobile health tools: a randomized comparative effectiveness trial. JAMA Intern Med 2022;182:975–83.
50. Iyer R, Nedeljkovic M, Meyer D. Using voice biomarkers to classify suicide risk in adult telehealth callers: retrospective observational study. JMIR Ment Health 2022;9:e39807.
51. Mehta A, Niles AN, Vargas JH, Marafon T, Couto DD, Gross JJ. Acceptability and effectiveness of artificial intelligence therapy for anxiety and depression (Youper): longitudinal observational study. J Med Internet Res 2021;23:e26771.
52. Chew HSJ, Ang WHD, Lau Y. The potential of artificial intelligence in enhancing adult weight loss: a scoping review. Public Health Nutr 2021;24:1993–2020.
53. Au-Yeung WM, Sahani AK, Isselbacher EM, Armoundas AA. Reduction of false alarms in the intensive care unit using an optimized machine learning based approach. NPJ Digit Med 2019;2:86.
54. Freeman K, Geppert J, Stinton C, et al. Use of artificial intelligence for image analysis in breast cancer screening programmes: systematic review of test accuracy. BMJ 2021;374:n1872.
55. Li F, Du Y. From AlphaGo to power system AI. IEEE Power and Energy Mag 2018;16:76–84.
56. Gupta S, Gupta A. Dealing with noise problem in machine learning data-sets: a systematic review. Procedia Computer Science 2019;161:466–74.
57. Sturmberg JP, Martin CM. *Handbook of systems and complexity in health*. New York: Springer; 2013.
58. Goldstein BA, Navar AM, Pencina MJ. Risk prediction with electronic health records: the importance of model validation and clinical context. JAMA Cardiol 2016;1:976–7.
59. Yang TH, Ziemba R, Shehab N, et al. Assessment of International Classification of Diseases, Tenth Revision, Clinical Modification (ICD-10-CM) code assignment validity for case finding of medication-related hypoglycemia acute care visits among Medicare beneficiaries. Med Care 2022;60:219–26.
60. Stein HD, Nadkarni P, Erdos J, Miller PL. Exploring the degree of concordance of coded and textual data in answering clinical queries from a clinical data repository. J Am Med Inform Assoc 2000;7:42–54.
61. Vaghani V, Wei L, Mushtaq U, Sittig DF, Bradford A, Singh H. Validation of an electronic trigger to measure missed diagnosis of stroke in emergency departments. J Am Med Inform Assoc 2021;28:2202–11.
62. Peterson ED. Machine learning, predictive analytics, and clinical practice: can the past inform the present? JAMA 2019;322:2283–4.
63. Shah ND, Steyerberg EW, Kent DM. Big data and predictive analytics: recalibrating expectations. JAMA 2018;320:27–8.
64. Cooper Z, Craig SV, Gaynor M, Van Reenen J. The price ain't right? Hospital prices and health spending on the privately insured. Dec 2015. Accessed Jan 23, 2023. Available at: [www.healthcarepricingproject.org/sites/default/files/pricing\_variation\_manuscript\_0.pdf](http://www.healthcarepricingproject.org/sites/default/files/pricing_variation_manuscript_0.pdf).
65. Cutler DM. What artificial intelligence means for health care. JAMA Health Forum 2023;4:e232652.
66. Ferrier D, Diver F, Corin S, McNair P, Cheng C. HealthLinks: incentivising better value chronic care in Victoria. Int J Integr Care 2017;12:A129.
67. Ferrier D, Campbell D, Hamilton C, et al. Co-designing a new approach to delivering integrated services to chronically ill patients within existing funding constraints–Victoria's HealthLinks trial. Int J Integr Care 2019;19:322.
68. Martin C, Hinkley N, Stockman K, Campbell D. Capitated telehealth coaching hospital readmission service in Australia: pragmatic controlled evaluation. J Med Internet Res 2020;22:e18046.
69. Martin CM, Vogel C, Grady D, et al. Implementation of complex adaptive chronic care: the Patient Journey Record system (PaJR). J Eval Clin Pract 2012;18:1226–34.
70. Gianfrancesco MA, Tamang S, Yazdany J, Schmajuk G. Potential biases in machine learning algorithms using electronic health record data. JAMA Intern Med 2018;178:1544–7.
71. Reyna MA, Nsoesie EO, Clifford GD. Rethinking algorithm performance metrics for artificial intelligence in diagnostic medicine. JAMA 2022;328:329–30.
72. Ibrahim SA, Pronovost PJ. Diagnostic errors, health disparities, and artificial intelligence: a combination for health or harm? JAMA Health Forum 2021;2:e212430.
73. Noor P. Can we trust AI not to further embed racial bias and prejudice? BMJ 2020;368:m363.
74. Lau BD, Haider AH, Streiff MB, et al. Eliminating health care disparities with mandatory clinical decision support: the venous thromboembolism (VTE) example. Med Care 2015;53:18–24.
75. Parikh RB, Teeple S, Navathe AS. Addressing bias in artificial intelligence in health care. JAMA 2019;322:2377–8.
76. Parasuraman R, Manzey DH. Complacency and bias in human use of automation: an attentional integration. Hum Factors 2010;52:381–410.
77. Lu J, Sattler A, Wang S, et al. Considerations in the reliability and fairness audits of predictive models for advance care planning. Front Digit Health 2022;4:943768.
78. Irvin JA, Kondrich AA, Ko M, et al. Incorporating machine learning and social determinants of health indicators into prospective risk adjustment for health plan payments. BMC Public Health 2020;20:608.
79. DeCamp M, Lindvall C. Latent bias and the implementation of artificial intelligence in medicine. J Am Med Inform Assoc 2020;27:2020–3.
80. Minssen T, Vayena E, Cohen IG. The challenges for regulating medical use of ChatGPT and other large language models. JAMA 2023;330:315–6.
81. Yang Z, Silcox C, Sendak M, et al. Advancing primary care with artificial intelligence and machine learning. Healthcare 2022;10:100594.
82. Ellis GF. Top-down causation and emergence: some comments on mechanisms. Interface Focus 2012;2:126–40.
83. Goldhahn J, Rampton V, Spinas GA. Could artificial intelligence make doctors obsolete? BMJ 2018;363:k4563.
84. Sulmasy DP. Advance care planning and "The Love Song of J. Alfred Prufrock". JAMA Intern Med 2020;180:813–4.

## References

1. Cilliers P. What can we learn from a theory of complexity? Emergence 2000;2:23–33.
2. Polanyi M. *The Tacit Dimension*. Routledge & Kegan Paul; 1966.
3. Gray B. The Cynefin framework: applying an understanding of complexity to medicine. J Prim Health Care 2017;9:258–61.
4. Kung TH, Cheatham M, Medenilla A, et al. Performance of ChatGPT on USMLE: potential for AI-assisted medical education using large language models. PLOS Digit Health 2023;2:e0000198.
5. van Dis EAM, Bollen J, Zuidema W, van Rooij R, Bockting CL. ChatGPT: five priorities for research. Nature 2023;614:224–6.
6. Sturmberg JP, Martin CM. Knowing - in medicine. J Eval Clin Pract 2008;14:767–70.
7. Brooks HL, Lovell K, Bee P, Sanders C, Rogers A. Is it time to abandon care planning in mental health services? A qualitative study exploring the views of professionals, service users and carers. Health Expect 2018;21:597–605.
8. Martin CM, Sturmberg JP. General practice–chaos, complexity and innovation. Med J Aust 2005;183:106–9.
9. Khayut B, Fabri L, Avikhana M. Modeling of intelligent system thinking in complex adaptive systems. Procedia Computer Science 2014;36.
10. Forrester JW. System dynamics: the foundation under systems thinking. The Sloan School of Management; 1999.
11. Khayut B, Fabri L, Avikhana M. *Advance Trends in Soft Computing: Proceedings of WCSC 2013, December 16-18, San Antonio, Texas, USA*. Vol 312. Studies in Fuzziness and Soft Computing. Springer; 2014.
12. Bassett DS, Gazzaniga MS. Understanding complexity in the human brain. Trends Cogn Sci 2011;15:200–9.
13. Barrett LF. Context reconsidered: complex signal ensembles, relational meaning, and population thinking in psychological science. Am Psychol 2022;77:894–920.
14. Kleckner IR, Zhang J, Touroutoglou A, et al. Evidence for a large-scale brain system supporting allostasis and interoception in humans. Nat Hum Behav 2017;1.
15. Craig AD. Interoception: the sense of the physiological condition of the body. Curr Opin Neurobiol 2003;13:500–5.
16. Dervin B. Clear…unclear? Accurate…inaccurate? Objective…subjective? Research…practice? Why polarities impede the research, practice, and design of information systems and how Sense-Making Methodology attempts to bridge the gaps. Part 2. In: Martin CM, Sturmberg JP, eds. Forum on Systems and Complexity in Medicine and Healthcare. J Eval Clin Pract 2010;16. In press.
17. Peper A. A general theory of consciousness I: consciousness and adaptation. Commun Integr Biol 2020;13:6–21.
18. Luppi AI, Mediano PAM, Rosas FE, et al. A synergistic core for human brain evolution and cognition. Nat Neurosci 2022;25:771–82.