Radiographs in the Office: Is a Second Reading Always Needed?
=============================================================

* Paul D. Smith
* Jonathan Temte
* John W. Beasley
* Marlon Mundt

## Abstract

*Background:* We evaluated the frequency, nature, and importance of the changes in patient care that occurred as a result of differences in outpatient radiograph readings for cases in which the primary care clinician, hypothetically, would not have requested a second reading by a radiologist.

*Methods:* Over 4 months, 1393 pairs of radiographic readings were collected from 9 volunteer primary care practices with 86 clinicians; the second reading in each pair was made by one of 42 radiologists. For 553 sets, the clinician hypothetically would not have requested a consultation from a radiologist. Review of these 553 pairs revealed 100 (18.1%) radiographs for which the radiologist’s reading did not agree with the clinician’s. Data from the original visit and subsequent related care were abstracted from patient charts and reviewed, and changes in clinical care resulting from the radiologist’s reading were identified.

*Results:* The radiologists’ second readings of these 553 sets of radiographs resulted in 14 cases (2.5% of 553; 14% of the 100 discordant cases) of one or more changes in care. We found 38 documented or presumed changes in care and no substantial changes in care.

*Conclusions:* Primary care clinicians are able to identify radiographs for which a second reading by a radiologist will not result in substantial changes in care.

Thirteen percent of general radiology examinations (plain films) in the United States are performed by family and general practitioners.1 Analyses of billing data have shown that 42% to 70% of outpatient radiographs are performed and read by nonradiologists.1–4 These studies addressed costs and which radiologic procedures were performed by radiologists and nonradiologists.
However, none of these studies specifically reported the frequency with which outpatient radiographs were initially read by the primary care clinician and then referred to a radiologist for a second interpretation. Halvorsen and Kunian5 found that 87.3% of Minnesota family physicians had on-site radiographic equipment. Likewise, Smith6 reported that 76% of Wisconsin family physicians had radiographic equipment in their own offices, and 87% had this equipment in the same building. These studies also reported that up to 54.2% of family physicians select which radiographs to refer for a radiologist’s reading.5,6

Several studies have compared the concordance of primary care physicians’ interpretations of office radiographs with the interpretations of radiologists7–12 and found concordance rates between 87.5% and 93.4%. Studies comparing Emergency Department physicians’ and radiologists’ interpretations of radiographs taken in the Emergency Department have reported concordance rates of 91.8% to 99.3%.13–17 A few studies have reported changes in clinical care as a result of discordant readings,7,8,14,18 but these studies are limited by small numbers of cases, and changes in clinical care were not the primary study aim.

Many radiographs taken in ambulatory settings are interpreted either by the treating clinician alone or with a second reading by a radiologist. We found little evidence in the literature that the second reading adds information necessary for patient care. The primary aim of this study was not to determine who made the “correct” reading but to determine whether adverse patient outcomes might occur if the primary care clinician did not obtain a radiology consultation.

## Methods

A prospective cohort study was conducted from April to July 1997. We gathered consecutive pairs of radiograph readings at 4 urban and 5 rural Wisconsin Research Network practices that volunteered to participate and that routinely referred all radiographs for radiology reading.
Three sites were family medicine residency training sites and 6 were community practices. During the study period, all child and adult patients for whom outpatient radiographs were ordered by the participating clinicians were eligible. Informed consent was obtained by the clinic radiograph technicians. Patients were not enrolled at times of high clinic volume when there was insufficient time to obtain consent.

The unit of analysis was the standard “set” of radiographs obtained for a particular body area. Bilateral radiographs were counted as 2 “sets” of radiographs. The participating clinicians were instructed to interpret their outpatient radiographs in the usual way, including their usual practice of asking for the opinion of a colleague or supervisor before the radiographs were sent for the radiologist’s reading. Resident physicians are expected to review their interpretations with faculty preceptors, although there are occasional instances when this does not occur. The clinicians were instructed to record their final interpretation and answer the hypothetical question, “If it were optional, would radiology over-reading [radiology referral] be requested?” on the data collection instrument before sending the radiograph for the second reading.

During the study period, the radiologists’ reports were photocopied; the patient’s name was replaced by a numeric identifier, and each report was matched with the study form bearing the same identifier. Each of the 3 family physician authors independently reviewed all the pairs of readings for potentially clinically important discordance and coded each pair as concordant or discordant. The authors coded a pair as discordant whenever there was any uncertainty. Chart reviews were performed by research assistants for all cases in which at least 2 of the 3 physician reviewers coded the interpretation pair as discordant and the clinician hypothetically would have declined radiology referral.
All materials in the chart related to the body area studied with the radiograph(s) were photocopied, starting with the index visit and ending 6 to 12 months later. These included office progress notes, emergency department records, hospital records, consultations, documentation of telephone conversations, and radiography or other testing related to the body area imaged at the index visit. Chart reviews were performed at least 12 months after the index visit.

The chart abstracts were then reviewed independently by the same 3 family physicians, and any change in the patient’s care was recorded. We counted additional telephone calls as a change in care because they required additional staff and patient time and potentially caused additional anxiety about abnormal results. We assumed that 2 telephone calls occurred when additional tests resulted from discordance: one to set up the test and one to report the results. We assumed that 3 telephone calls occurred when old radiographs were obtained for comparison: one to determine where the old radiographs were located, one to obtain them, and one to report the results of the comparison. A substantial change in care was defined as one whose absence would likely have caused harm, such as death, permanent disability, or prolonged recovery. Disagreement between reviewers was resolved by consensus.

Approximately 1 year after the original chart review, a different research assistant randomly repeated 9% of the chart reviews, and the authors repeated their evaluation to check the consistency of the process. Descriptive analysis summarized radiograph type, frequency, and the hypothetical choice to refer by type.
Inter-rater reliability of discordance between pairs of readings was measured using the κ statistic.19 For each body area, the χ2 test was used to test for a significant difference in the proportion of agreement between radiographs for which referral would hypothetically have been requested or declined.

## Results

A total of 1530 patients had radiographs and interpretations (Figure 1). Seventeen community family physicians, 3 community surgeons, 16 University of Wisconsin faculty family physicians, 36 University of Wisconsin family practice residents, 5 nurse practitioners, and 9 physician assistants participated in the study. A few initial readings by non–primary care clinicians were inadvertently included in the data submitted; because we did not collect specialty, degree, or level-of-training data, it was impossible to remove those readings or analyze those factors. Forty-two radiologists from 3 institutions did at least one reading.

[Figure 1.](http://www.jabfm.org/content/17/4/256/F1) Figure 1. Study results.

Fewer than 1% of patients refused participation. The frequency of enrollment varied by site and tended to drop off as the study progressed. Based on the assumption that enrollment was near 100% initially and on the level of drop-off over time, we estimate an average enrollment of 70% of eligible patients. One thousand three hundred ninety-three cases (91%) had both readings and an answer to the hypothetical question. Of those 1393 cases, 553 (39.7%) were cases in which the clinician hypothetically would have declined referral to a radiologist for a second reading. The agreement between reviewers coding concordance or discordance for the 1393 cases ranged from κ = 0.55 to 0.60; the overall agreement between any 2 reviewers was 69.4%.
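The pairwise agreement statistics above (κ = 0.55 to 0.60 alongside 69.4% raw agreement) reflect how the κ statistic discounts agreement expected by chance. As an illustration only, a minimal sketch of pairwise Cohen's κ for two reviewers and two categories (concordant/discordant) follows; the 2×2 table of counts is hypothetical, not the study's data.

```python
def cohen_kappa(a, b, c, d):
    """Cohen's kappa for two raters and two categories, from a
    2x2 table of counts: a = both code concordant, d = both code
    discordant, b and c = the raters disagree."""
    n = a + b + c + d
    p_o = (a + d) / n  # observed agreement
    # chance agreement from the raters' marginal frequencies
    p_e = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical counts for 100 reading pairs coded by two reviewers:
k = cohen_kappa(40, 9, 5, 46)  # raw agreement 86%, kappa ~0.72
```

Note that a raw agreement of 86% in this hypothetical table yields κ of only about 0.72, which is why the study reports both figures: κ measures agreement beyond what the marginal coding rates alone would produce.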
For the radiographs that were hypothetically not referred for radiologist reading, 100 of 553 (18.1%) were classified by at least 2 reviewers as having potentially clinically important discordance, and chart review was performed. After adjusting for differences in the types of radiographs performed and clinician desire for radiology consultation, logistic regression analysis showed one clinic with significantly more discordance between primary care clinician and radiologist (49% versus 28% for the other clinics; odds ratio, 2.07; *P* < .01).

Chest radiographs were the most common, comprising 29.4% of the total (Table 1). Lower and upper extremity radiographs were the next most frequent categories, comprising 27.7% and 23.9%, respectively. The frequency at which primary care clinicians would hypothetically have declined radiology referral varied from 50.8% for upper extremity radiographs to 27.1% for chest radiographs.

View this table: [Table 1.](http://www.jabfm.org/content/17/4/256/T1) Table 1. Description of Radiographs Obtained by Primary Care Clinicians and Those for Which Radiologist Referral Reading Was Hypothetically Declined

The concordance between the readings of the primary care clinician and the radiologist for all radiographs was 1010 of 1393 (72.5%) (Table 2). The concordance of radiograph readings by body area ranged from 80.2% for upper extremity to 58.2% for abdomen. When radiology referral would hypothetically have been requested, concordance ranged from 52.9% for abdominal radiographs to 72.9% for upper and lower extremity radiographs; concordance for all radiograph readings in this group was 66.3%. A higher frequency of concordance was noted for all body areas when radiology referral would have been declined, varying from 62.5% for spine radiographs to 91.7% for face and head radiographs.
The concordance rate for all radiograph readings when radiology referral would have been declined was 81.9%.

View this table: [Table 2.](http://www.jabfm.org/content/17/4/256/T2) Table 2. Percentage Concordance between Primary Care Clinician and Radiologist Readings, Referral Hypothetically Requested or Declined

There were 100 of 553 (18.1%) cases in which radiology referral would have been declined and the readings were discordant (Figure 1). A change in clinical care occurred in 14 of 100 (14%) cases. All cases involved primary care clinicians: 2 physician assistant cases, 3 community family physician cases, 4 resident/faculty family physician cases, and 5 faculty family physician cases. There were 5 cases of definite or possible acute fracture, one case of stress fracture, 5 cases of definite or possible lung nodule, one case of possible mild acromioclavicular subluxation, one case of lumbar spondylolisthesis, and one case of possible pneumonia. There were 4 episodes of discordance related to the presence or possible presence of fractures with no follow-up visits documented in the chart and unknown outcomes (Table 3).

View this table: [Table 3.](http://www.jabfm.org/content/17/4/256/T3) Table 3. Changes in Clinical Care and Outcomes

We found 38 documented or presumed changes in care (Table 3). The changes included 23 documented or presumed telephone calls; 5 sets of additional follow-up radiographs; 2 additional office visits; 2 instances in which copies of old comparison radiographs were obtained; 2 repeat radiographs; one cast application; one new prescription; one excuse from gym class; and one “possible CT scan” that was never scheduled. More than one change in care occurred in some cases. We found no substantial changes in care or episodes of averted patient harm.

Repeat medical record review of 9 randomly selected cases of the 100 showed complete agreement with the initial review for 8 cases.
We discovered documentation of telephone follow-up for one case in which no follow-up had been found on initial review. This additional case with a change of care was included in the total reported above.

## Discussion

Use of radiography in the evaluation of ambulatory patients is a common and accepted practice in the United States. Many radiographs taken in primary care clinicians’ offices are read a second time by a radiologist,4 but the exact frequency of referral is unknown.

Our study’s overall concordance rate of 72.5% for radiograph readings was somewhat lower than the 87.5% to 92.4% rates in other published studies.8–11,18 The lower concordance is expected because our study counted minor discrepancies as discordant to avoid missing any changes in clinical care, although it is difficult to determine whether previous studies counted minor discrepancies. Even with our increased sensitivity for calling a pair of readings discordant, the rate of discrepancies in this study is not substantially different from the reported 9.9% to 59% range of discordance among radiologists for radiograph readings.20–27 Bergus et al10 carefully evaluated discordant readings between family physicians and radiologists and found that 35.2% were “interpreted correctly by the family physician.” Our primary aim was not to determine who had the correct reading but rather to determine the effect of the second reading by the radiologist on the care of the patient. Had we attempted to determine who was correct, our rate of accuracy might have been higher and closer to other published rates.
The proportions of hypothetical referrals indicate that the clinicians in our study may have more confidence in their ability to interpret extremity radiographs than other radiographs, similar to results reported by Halvorsen et al.9 Increased confidence in interpreting extremity radiographs is also suggested by our study’s higher concordance with the radiologists’ readings for these films compared with all types of radiographs, again similar to previous studies.8,10

If we assume that changes in care did not occur in the other 453 cases in which radiology referral would hypothetically have been declined but the readings agreed, these 14 cases represent 2.5% (14 of 553) of the total. We found no substantial changes in care or episodes of averted patient harm. These findings are similar to those of other small studies that attempted to evaluate the effect on patient care of discordance between the primary care and radiologist readers.7,8,14,18 The frequency of changes in care when referral would have been requested remains a substantial but unanswered question. Time and funding restrictions precluded us from exploring this question, but it would be important to address in a future study.

This study has several limitations. This was not a consecutive sample of 100% of the radiograph cases at the participating clinics. We were unable to track the total number of refusals or missed opportunities to enroll patients. However, because there was no systematic method for excluding patients, there is no reason to suspect a selection bias. We were unable to determine why one of the 3 family practice teaching sites had a higher frequency of discordant readings than the other clinics. This raised the total frequency of discordance but did not affect the outcome, because none of the 14 cases of change in care occurred at that site.
We did not track the frequency of readings according to training status (ie, resident, attending physician, nurse practitioner, or physician assistant). Halvorsen8 reported that faculty family physicians had only a 3.4% higher concordance with radiologists’ interpretations than resident family physicians (88.6% versus 85.2%), with an additional 3.5% increase in concordance when faculty and residents interpreted collaboratively. With rare exceptions, the residents in this study did collaborative interpretation, so we do not believe resident participation significantly altered the results. Physician assistants interpreted radiographs in only 2 of the 14 cases of changes in care, so a significant influence on the generalizability of the final results seems unlikely.

Our study protocol did not include permission to contact patients. Such contact would have allowed us to confirm the outcomes derived from the chart reviews, to discover changes in care that patients received that were not recorded in the primary clinician’s office records, and to avoid some or all of the 4 “unknown outcome” possible fractures in this study. A larger study should include consent to contact the patients.

The most important limitation is that, because the number of discordant readings with changes in care was small, rare events might be missed. Given that no substantial changes in care or episodes of patient harm occurred, and using Hanley’s method of estimating risk,28 our study showed a 0% to 0.5% chance (95% confidence interval) for these events.

Several factors affect the diagnostic interpretation and recommendations that result from reading radiographs, whether by a primary care clinician or by a radiologist. Although having the clinical history can improve detection of radiographic abnormalities,24,29–32 some studies have shown no benefit.26,33 Human variability also has an effect.
For example, radiologists reading identical mammograms twice, with 5 months between readings, gave the same interpretation for only 84% of the cases.34 Another issue is the context bias that is unavoidable for both the primary care clinician and the radiologist. Context bias, as described by Egglin and Feinstein,35 refers to the effect of the prevalence of disease on the interpretation of diagnostic radiographs: radiographs from a population with a low frequency of disease, when intermingled with radiographs from a population with a high frequency of disease, are more likely to be interpreted as abnormal. Primary care clinicians see a population in their offices with lower risks of most radiologic abnormalities than the hospital and emergency department population that generates the majority of the films radiologists interpret. This difference in disease frequency will incline the radiologist to interpret equivocal findings as abnormal. This context bias, combined with the history and physical examination performed before the radiographs are obtained, and again afterward if necessary, allows the primary care clinician to dismiss incidental findings.

With an estimated $38.5 billion spent in the United States on all radiologic studies in 1997,36 and spending growing every year, a larger multiregional, multicenter primary care study to assess the value or detriment of the second reading of outpatient radiographs seems warranted. A future study would include methods beyond chart review to determine the final clinical outcome and would be sufficiently powered to discover relatively rare events. In addition to more research on context bias in the ambulatory setting, a larger study could result in guidelines for requesting radiology referral for ambulatory radiographs.

This is the first study to specifically address the effect of a second reading of office radiographs on the care of patients. We found little added benefit, with a very small frequency of any change in clinical care.
The majority of the changes were episodes of unnecessary additional radiologic procedures, administrative effort, or follow-up care, without substantive improvement in the clinical care of the patient. We conclude that primary care clinicians are able to identify those radiographs for which a second reading by a radiologist will not result in substantial changes in care or episodes of patient harm.

Increased interpretation of office radiographs by family physicians has financial implications but does not change total costs. Any physician may charge for the technical component (taking the image) and the interpretation as long as a separate written report is generated that addresses the findings, relevant clinical issues, and comparative data (when available), similar to the method of billing for electrocardiograms. Family physicians charging for more of their interpretations of radiographs will shift reimbursement from radiologists to family physicians, but the cost to the health care system remains the same.

Liability concerns may drive some health care organizations to demand that all ambulatory radiographs be read by a radiologist, but the literature does not support such a policy. There is an increasing shortage of radiologists in the United States.37–39 Primary care clinicians selecting which radiographs to send for a second reading would free radiologists’ time for interpretation of more complex radiographs and for radiologic interventions. Moreover, up to 45% of rural clinicians do not have daily access to a radiologist,5 and up to 73% of outpatient chest, spine, pelvic, and extremity radiographs are not read a second time by a radiologist.4 This study’s results give these clinicians some measure of confidence that important missed diagnoses are, at worst, very rare events. Universal second readings are not warranted based on our findings.
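A closing technical note on the zero-event risk bound from Hanley's method used in the limitations discussion: when 0 events are observed in n trials, the upper bound of the 95% confidence interval for the event rate is approximately 3/n (the "rule of three"), and exactly the p solving (1 − p)^n = 0.05. A minimal sketch with this study's n = 553, which reproduces the roughly 0.5% upper bound reported:

```python
def rule_of_three(n):
    """Approximate upper 95% confidence bound on an event
    probability when 0 events are observed in n trials."""
    return 3 / n

def exact_upper_bound(n, alpha=0.05):
    """Exact binomial upper bound: solve (1 - p)**n = alpha for p."""
    return 1 - alpha ** (1 / n)

approx = rule_of_three(553)     # ~0.0054, i.e. about 0.5%
exact = exact_upper_bound(553)  # ~0.0054 as well for n this large
```

Both forms give about 0.54% for n = 553, consistent with the 0% to 0.5% interval cited from Hanley and Lippman-Hand.28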
## Acknowledgments

We acknowledge the important contributions of the Wisconsin Research Network (WReN) clinicians, their radiography technicians, and the office staff who gathered the data. We thank Linda Manwell, RN, and Mary Stone for secretarial support, and Pamela Wiesen, MBA, for project support.

## Notes

* This work was supported by a $3000 grant from the Department of Family Medicine, University of Wisconsin Medical School; administrative support was provided by the Wisconsin Research Network. These findings were presented in abstract form at the 1999 Wisconsin Primary Care Research Forum, 13th Annual Wisconsin Research Network Meeting; Oct 1999; Wisconsin Dells, Wisconsin, and at the North American Primary Care Research Group 28th Annual Meeting; Nov 2000; Amelia Island, Florida.
* Received for publication January 4, 2004.
* Revision received January 4, 2004.

## References

1. Sunshine JH, Bansal S, Evens RG. Radiology performed by nonradiologists in the United States: who does what? AJR Am J Roentgenol 1993;161:419–29.
2. Sunshine JH, Mabry MR, Bansal S. The volume and cost of radiologic services in the United States: who does what? AJR Am J Roentgenol 1991;157:609–13.
3. Levin DC. The practice of radiology by nonradiologists: cost, quality and utilization issues. AJR Am J Roentgenol 1994;162:513–8.
4. Spettell CM, Levin DC, Rao VM, Sunshine JH, Bansal S. Practice patterns of radiologists and nonradiologists: national Medicare data on the performance of chest and skeletal radiography and abdominal and pelvic sonography. AJR Am J Roentgenol 1998;171:3–5.
5. Halvorsen JG, Kunian A. Radiology in family practice: experience in community practice. Fam Med 1988;20:112–7.
6. Smith PD. Office use of X-rays by family physicians in Wisconsin. Wis Med J 1998;97:51.
7. Kruitzky L, Haddy RI, Curry RW Sr. Interpretation of chest roentgenograms by primary care physicians. South Med J 1987;80:1347–51.
8. Halvorsen JG, Kunian A, Gjerdingen D, Connolly J, Koopmeiners M, Cesnik J. The interpretation of office radiographs by family physicians. J Fam Pract 1989;28:426–32.
9. Halvorsen JG, Kunian A. Radiology in family practice: a prospective study of 14 community practices. Fam Med 1990;22:112–7.
10. Bergus GR, Franken EA Jr, Koch TJ, Smith WL, Evans ER, Berbaum KS. Radiologic interpretation by family physicians in an office practice setting. J Fam Pract 1995;41:352–6.
11. Franken EA Jr, Bergus GR, Koch TJ, Berbaum KS, Smith WL. Added value of radiologist consultation to family practitioners in the outpatient setting. Radiology 1995;197:759–62.
12. Strasser RP, Bass MJ, Brennan M. The effect of an on-site radiology facility on radiologic utilization in family practice. J Fam Pract 1987;24:619–23.
13. Mucci B. The selective reporting of x-ray films from the accident and emergency department. Injury 1983;14:343–4.
14. McLain PL, Kirkwood CR. The quality of emergency room radiograph interpretations. J Fam Pract 1985;20:443–8.
15. O’Leary MR, Smith MS, O’Leary DS, et al. Application of clinical indicators in the emergency department. JAMA 1989;262:3444–7.
16. Warren JS, Lara K, Connor PD, Cantrell J, Hahn RG. Correlation of emergency department radiographs: results of a quality assurance review in an urban community hospital setting. J Am Board Fam Pract 1993;6:255–9.
17. Preston CA, Marr JJ 3d, Amaraneni KK, Suthar BS. Reduction of “callbacks” to the ED due to discrepancies in plain radiograph interpretation. Am J Emerg Med 1998;16:160–2.
18. Knollmann BC, Corson AP, Twigg HL, Schulman KA. Assessment of joint review of radiologic studies by a primary care physician and a radiologist. J Gen Intern Med 1996;11:608–12.
19. Fleiss JL. Statistical methods for rates and proportions. 2nd ed. New York: John Wiley; 1981. p. 38–46.
20. Herman PG, Gerson DE, Hessel SJ, et al. Disagreements in chest roentgen interpretation. Chest 1975;68:278–82.
21. Tuddenham WJ. Visual search, image organization and reader error in roentgen diagnosis. Radiology 1962;78:694–704.
22. Herman PG, Hessel SJ. Accuracy and its relationship to experience in the interpretations of chest radiographs. Invest Radiol 1975;1:62–7.
23. Revesz G, Kundel HK. Psychophysical studies of detection errors in chest radiology. Radiology 1977;123:559–62.
24. Rhea JT, Potsaid MS, Deluca SA. Errors of interpretation as elicited by a quality audit of an emergency radiology facility. Radiology 1979;132:277–80.
25. Hessel SJ, Herman PG, Swensson RG. Improving performance by multiple interpretations of chest radiographs: effectiveness and cost. Radiology 1978;127:589–94.
26. Swensson RG, Hessel SJ, Herman PG. Omissions in radiology: faulty search or stringent reporting criteria? Radiology 1977;123:563–7.
27. Yerushalmy J. The statistical assessment of the variability in observer perception and description of roentgenographic pulmonary shadows. Radiol Clin North Am 1969;7:381–92.
28. Hanley JA, Lippman-Hand A. If nothing goes wrong, is everything all right? JAMA 1983;249:1743–5.
29. Schreiber MH. The clinical history as a factor in roentgenogram interpretation. JAMA 1963;185:399–401.
30. McNeil BJ, Hanley JA, Funkenstein HH, Wallman J. Paired receiver operating characteristic curves and the effect of history on radiographic interpretation: CT of the head as a case study. Radiology 1983;149:75–7.
31. Potchen EJ, Gard JW, Lazar P, Lahaie P, Andary M. The effect of clinical history data on chest film interpretation: direction or distraction. Invest Radiol 1979;14:404.
32. Elmore JG, Wells CK, Howard DH, Feinstein AR. The impact of history on mammographic interpretations [abstract]. JAMA 1997;277:49–52.
33. Swensson RG, Hessel SJ, Herman PG. The value of searching films without specific preconceptions. Invest Radiol 1985;20:100–7.
34. Elmore JG, Wells CK, Lee CH, Howard DH, Feinstein AR. Variability in radiologists’ interpretations of mammograms. N Engl J Med 1994;331:1493–9.
35. Egglin TK, Feinstein AR. Context bias, a problem in diagnostic radiology. JAMA 1996;276:1752–5.
36. Smith PD. Family physician interpretation of outpatient radiographs. Radiology (Position Paper). Shawnee Mission (KS): American Academy of Family Physicians. Available at: [http://www.aafp.org/x7036.xml](http://www.aafp.org/x7036.xml)
37. Bhargavan M, Sunshine JH, Schepps B. Too few radiologists? AJR Am J Roentgenol 2002;178:1075–82.
38. Sunshine JH, Cypel YS, Schepps B. Diagnostic radiologists in 2000: basic characteristics, practices, and issues related to the radiologist shortage. AJR Am J Roentgenol 2002;178:291–301.
39. Hawkins J. Addressing the shortage of radiologists. Radiol Manage 2001;23:26–8.