Abstract
Purpose: Although a growing number of health care reforms aim to encourage health care-based social screening, no prior work has synthesized existing social screening implementation research to inform screening practice and policymaking.
Methods: Systematic scoping review of peer-reviewed literature on social screening implementation published 1/1/2011–2/17/2022. We applied a 2-concept search (health care-based screening; social risk factors) to PubMed and Embase. Studies had to explore the implementation of health care-based multi-domain social screening and describe 1+ outcome related to the reach, adoption, implementation, and/or maintenance of screening. Two reviewers extracted data related to key study elements, including sample, setting, and implementation outcomes.
Results: Forty-two articles met inclusion criteria. Reach (n = 7): We found differences in screening rates by patient race/ethnicity; findings varied across studies. Patients who preferred Spanish had lower screening rates than English-preferring patients. Adoption (n = 13): Workforce education and dedicated quality improvement projects increased screening adoption. Implementation (n = 32): Time was the most cited barrier to screening; administration time differed by tool/workforce/modality. Use of standardized screening tools/workflows improved screening integration. Use of community health workers and/or technology improved risk disclosure and facilitated screening in resource-limited settings. Maintenance (n = 1): Only 1 study reported on maintenance; results showed a drop in screening over 21 months.
Conclusions: Critical evidence gaps in social screening implementation persist. These include gaps in knowledge about effective strategies for integrating social screening into clinical workflows and ways to maximize screening equity. Future research should leverage the rapidly increasing number of screening initiatives to elevate and scale best practices.
- Implementation Science
- Screening
- Social Determinants of Health
- Social Risk Factors
- Scoping Review
- Systematic Review
Introduction
Based on the recognition that socioeconomic adversity influences health outcomes,1 health care settings in the United States (US) are increasingly screening patients for nonmedical drivers of health, or social factors, such as food security and housing stability.2,3 These efforts are likely to continue expanding as national health plan and hospital quality measures on social screening are adopted in 2023.4–6 Social screening initiatives are anticipated to increase health care teams’ awareness of social risks that adversely impact health and consequently to inform efforts to decrease social risk or otherwise accommodate risks in ways that will improve health outcomes and health equity.7–9
We undertook this systematic scoping review to assess the existing evidence on the implementation of social screening, including the workforce and workflows used to implement screening in different settings and populations, as well as the comparative impacts of different approaches on screening reach, adoption, implementation, and maintenance/sustainability.10–12 Given that social screening has been championed as 1 component of multidimensional efforts to improve health equity,7–9 we were especially interested in how different integration strategies affected implementation outcomes in diverse populations, including populations identifying with minoritized racial/ethnic groups and endorsing non-English language preferences.13,14
Methods
This systematic scoping review on implementation outcomes was developed as part of a larger report on health care-based social screening that also explored (1) the prevalence of social screening; (2) the properties of social screening tools; (3) patient/caregiver perspectives on screening; and (4) provider perspectives on screening.2 In consultation with an experienced medical librarian, we outlined our review protocol and developed a 2-concept search reflecting both health care-based screening practices and specific social risk factors to find relevant articles. This search strategy was based on a 2019 systematic review of the peer-reviewed literature on the psychometric and pragmatic properties of social screening tools that identified research on screening for social risk factors.15 We adapted the search for PubMed and Embase databases. See Appendix 1 for additional search information.
To be included as part of the overarching scoping review, articles had to 1) involve multi-domain social screening (ie, screen for 2 or more domains related to socioeconomic circumstances, such as housing stability, food security, transportation access, utilities security, or financial strain); 2) be based in a US health care setting; and 3) be an original research study published in the academic peer-reviewed literature between 1/1/2011 and 2/17/2022. Our focus was on multi-domain screening given the interdependence of screening domains16,17 and national policy measures/professional society recommendations on multi-domain screening.18–23 To be included in the implementation outcomes review, specifically, studies also had to describe 1 or more outcomes related to screening reach, adoption, implementation, and/or maintenance of screening practices, based on the implementation science RE-AIM framework.12,24
The RE-AIM framework consists of 5 core domains for assessing implementation outcomes: reach, effectiveness, adoption, implementation, and maintenance.12 These domains have evolved over time to more explicitly focus on the equity and sustainability of interventions.12,24 Reach outcomes relate to the number or proportion of individuals who participated in an intervention; they generally are used to inform implementation strategies that increase access to evidence-based interventions. Reach equity outcomes may evaluate differences in who received an intervention based on demographic characteristics.12 Effectiveness outcomes characterize intervention impacts on a range of more downstream outcomes, which can include but are not limited to participant acceptability as well as participant health, wellbeing, and quality of life. Effectiveness equity outcomes explore the differential impacts of an intervention on participant subpopulations.12 Adoption outcomes focus on the number or proportion of settings or individuals delivering an intervention that participated in an intervention. Whereas reach focuses on the intervention’s target population (eg, patients completing screening questionnaires), adoption focuses on the settings and populations tasked with delivering the intervention (eg, staff providing screening questionnaires). Adoption mediators can include organizational-, setting-, and individual-level characteristics that influence whether an intervention was delivered and also can inform the development of strategies to increase equity in intervention uptake.12 Implementation outcomes reflect multiple aspects of how a particular intervention was delivered, including whether it was delivered as intended (fidelity); whether/what changes were made to adjust for implementing the intervention in different settings/populations (adaptation), which is particularly relevant to implementation equity; and intervention-related costs (including staff time and direct financial costs).12 Maintenance outcomes assess whether/how well an intervention is sustained over time, including at the individual- and organizational-level. Maintenance equity outcomes might assess what policy-, community-, organizational-, and individual-level factors contribute to the long-term sustainability of an intervention in different settings/populations.12
In this scoping review, we included articles describing screening reach only if they described comparative reach, that is, compared the reach of screening before versus after an intervention or compared screening reach in different patient populations, rather than simply described reach resulting from 1 implementation approach. We also did not include studies on effectiveness in this review (eg, studies examining the impact of screening on social risk, health/wellness, health care utilization/cost) because these studies did not distinguish between the impacts of screening alone versus screening coupled with related interventions to assist with identified social risks, and their focus was on evaluating the assistance interventions, not the screening itself. Previous reviews have reported on other markers of social screening effectiveness, specifically the acceptability of screening for patients/caregivers and health care teams.12,24–26 Findings related to acceptability research outcomes are covered in separate publications and not described here.2,25,26 Table 1 provides additional information about how we applied these RE-AIM outcomes. Additional exclusion criteria included: 1) irretrievable full text or 2) insufficient information about the screening implementation approach/outcomes.
Search results were uploaded to the systematic review platform, Covidence, and duplicates were removed.27 The original search was conducted on 8/8/2021. Additional articles were uploaded to Covidence through 2/17/2022, based on a weekly PubMed alert created using our 8/8/2021 search and by expert referral.28,29 Two reviewers from the study team (E.H.D., B.A., E.M.B., V.L., M.F.M., L.M.G.) independently reviewed each title/abstract to determine whether the study met study inclusion criteria. The team met weekly to discuss and resolve discrepancies. Once we reached more than 90% agreement between reviewers, the remaining titles/abstracts were each reviewed by only 1 study team member. Each study selected from the title/abstract review for full-text review was then reviewed by 2 of 4 reviewers (E.H.D., B.A., E.M.B., V.L.). When we again reached more than 90% agreement about inclusion, only 1 study team member assessed each of the remaining studies for inclusion based on the group’s definitions. Included articles containing information about implementation outcomes were flagged by the initial reviewers. After all full-text articles were reviewed, 2 reviewers (E.H.D., B.A.) extracted data from the articles describing implementation outcomes into a templated spreadsheet that was developed and tested by the study team before use. Extracted data included: study design, study sample (size and demographics), health care setting, type of data (qualitative/quantitative/mixed methods), and study outcomes related to each of the relevant implementation outcomes (reach, adoption, implementation, maintenance). The review followed the Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) guidelines.30
Results
Our initial search yielded 6777 unique articles about social screening in health care settings; 363 of these articles were flagged for full-text review. Forty-two articles met all inclusion criteria for this implementation outcomes scoping review; all were unique implementation studies (Figure 1). Six of 42 studies (14.3%) used experimental designs; the remainder were descriptive (85.7%). Twenty studies used quantitative data analysis (47.6%); 22 applied mixed/qualitative methods (52.4%). Ten articles included data from both health care clinicians/staff and patients (23.8%); 18 included patients only (42.9%), and 12 focused on clinicians/staff (28.6%). Study sample sizes ranged from 5 to 694 (median: 25.5) for studies including clinicians/staff and from 7 to 100,097 (median: 588.5) for studies including patients. Twenty-nine studies (69.0%) took place in primary care settings; among them, 17 (59%) were in pediatric primary care settings. See Table 2 for a summary of these and additional study characteristics. See Table 3 for a summary of the type of study data used, patient population, and setting, by RE-AIM domain.
Reach
Seven studies (21.4%) looked at reach-related outcomes; none used experimental designs (see Tables 3 and 4).31–37 All 7 used quantitative data; 2 (28.6%) used mixed methods.35,36 Six of these descriptive studies (85.7%) compared the absolute number or proportion of patients screened across different settings/populations;31–36 1 study compared screening reach before and after a workflow change.37 No studies included a comparison group. A single retrospective chart review looked at how the type of workforce available to support resource referrals influenced screening reach.34 The authors found that patients (both adult and pediatric) were screened at higher rates at primary care practices with a community health worker (CHW) dedicated to social service support (28.8%) relative to practices without CHWs (15.3%) or with CHWs not dedicated to social service support (12.7%).34 A separate study in both adult and pediatric patients looked at how screening modality affected reach by comparing screening rates in the emergency department (ED) setting when screening was conducted by staff who entered the patient room versus via a phone call into the patient room (a change made in response to the COVID-19 pandemic).37 Rates were not significantly different between the 2 screening modes.37
All the articles describing reach-related outcomes reported on the race/ethnicity of the study setting population (see Table 2). Five studies (71.4%) explored variations in screening reach by patient race and/or ethnicity; reported differences were inconsistent.31–33,36,37 For example, a study describing social screening in more than 100 community health centers (CHCs) found lower rates of screening in non-Hispanic White and Hispanic patients but higher rates of screening in non-Hispanic Black patients, as compared with the proportion of these groups in the overall patient population.33 In contrast, a study in 1 academic primary care clinic found that Black patients were under-represented among screened patients and White patients were over-represented.32 Two of the 5 studies that explored differences in screening reach by race/ethnicity also explored differences by language; both found lower rates of screening among patients who preferred to speak Spanish.33,36
Adoption
Thirteen studies (33.3%) reported screening adoption rates by clinical team members (Tables 3 and 4).34,38–49 Four of these articles (30.8%) involved experimental designs (a postintervention design with a nonrandomized comparison group40 [the only adoption study with a comparison group] and pre-/postdesigns43,44,46). Eight of 13 studies (61.5%) exclusively used quantitative data (see Table 3).34,38,39,42,46–49 Ten studies (76.9%) indirectly evaluated the number/proportion of clinicians/staff who conducted screening by analyzing the number/proportion of patient notes with documented screening results or the number of completed screens;34,38–42,44–46,48 the remaining 3 studies assessed adoption outcomes more directly: 1 observed pediatric resident physicians during clinical encounters43 and 2 surveyed pediatric clinicians about screening practices.47,49 Ten of the 13 articles in this group looked at screening adoption specifically among clinicians (eg, physicians, nurse practitioners);34,38,40,41,43,44,46–49 the others included screening practices among other health care staff (eg, medical assistants [MAs], registered nurses [RNs]).39,42,45 Substantial heterogeneity in implementation approaches (eg, who conducted the screening and how) and in study methodology (eg, how adoption was measured) makes it difficult to synthesize findings across studies. All 4 of the articles using an experimental design reported an increase in screening adoption after clinician education/training around screening,40,43,44,46 3 of which targeted pediatric resident physicians.40,43,44 Two additional descriptive studies in pediatric settings reported an increase in screening adoption after continuous quality improvement interventions (eg, plan-do-study-act cycles).42,45
Implementation
Thirty-two articles (76.2%) included information on screening implementation outcomes, including barriers/facilitators to screening, screening fidelity (whether screening was implemented as intended) and adaptations (how screening implementation was changed), time to screen (to administer screening and/or for patients to complete), workforce and modality for screening, and screening costs (Tables 3 and 4).32,35,36,39,41–43,45,47,50–72 As described in the sections above, some of these articles also looked at how reach and/or adoption varied by different implementation approaches. Three articles (9.4%) used experimental designs (randomized trials60,62 [the only 2 articles on implementation outcomes that used a comparison group] and a pre-/postdesign43). Across the 32 articles, 13 (40.6%) exclusively used quantitative data32,39,42,47,53–55,57,60–62,64,72 and 9 (28.1%) exclusively used qualitative data (see Table 3).52,56,63,65–70 Common facilitators of screening included: consistent communication about screening progress and processes with the health care team,32,41,45,68 clear introduction and framing of the screening rationale and processes with patients/caregivers,52,68 and training health care teams on empathic inquiry and trauma-informed care.65,68 Commonly cited barriers to implementation included staffing availability and time.45,47,57,63,65,68,69,72
Sixteen descriptive studies (50.0%) commented on screening implementation fidelity and adaptations.35,36,41,45,50,51,53,56,57,63,64,66,70–72 Two of them reported that frontline ED staff used their own judgment or “intuition” to determine when/whom to screen.50,51 Fourteen additional studies mentioned that implementation adaptations were made (eg, who could conduct screening was broadened, changes were made to the screening tool, clinics standardized the introduction to the screening tool41,63,71), but lacked details on what was changed, why, or what effect the modifications had on screening implementation outcomes. Five descriptive studies reported that having a standardized process for screening helped to normalize screening for patients and improved integration in the clinical workflow.41,52,63,68,72
Five of the 32 studies (15.6%) described the time it took to conduct screening.39,43,53,59,60 Two studies used an experimental design to increase screening by pediatric resident physicians; both reported that screening typically added less than 2 to 5 minutes to visits.43,60 One descriptive study compared time to complete screening by modality, reporting that on average it took just over 9 minutes for adult ED patients to self-administer a screening tool via Chatbot versus less than 7 minutes when screening was completed as an online survey.59 The Chatbot was preferred by patients with low literacy and reduced ED personnel time.59
Although some studies provided information about who conducted screening, only 1 descriptive study directly compared the impact of different screening workforces on a nonreach or nonadoption implementation outcome.61 This study was based in an obstetrics clinic and found that patients were more likely to disclose social risks when screened by CHWs versus RNs.61
Two studies looked at screening modality. Both were based in EDs and compared the influence of different screening modalities on disclosure rates or experience of care.59,62 One randomized trial in a pediatric ED found that tablet-based screening had higher social risk disclosure rates compared with face-to-face screening;62 the other, descriptive study found that the aforementioned Chatbot improved the screening experience for adult patients with low literacy who needed additional assistance completing screening.59
Two of the 32 studies (6.3%) described the financial costs of screening.61,66 One estimated costs based on qualitative interviews with CHC leaders and found that costs varied considerably by the workforce involved in screening program planning, training, development, and implementation.66 The second study (the aforementioned obstetrics-based study) reported that it was less expensive to have CHWs conduct social screening than RNs.61
Maintenance
Only 1 of the 42 studies in the review (2.4%) reported screening maintenance outcomes (Tables 3 and 4). This study used an experimental pre-/postdesign without a comparison group. Over the 21-month period after a social screening educational intervention, the study found a significant drop in pediatric residents’ screening rates of hospitalized patients.40 The median duration of continued screening was 8 months.40
Discussion
Based on the recent growth in both interest and activity around social screening in the US health care sector,2–4 it is an important time to examine and identify evidence gaps related to screening implementation. In this systematic scoping review, we found 42 articles that described outcomes related to screening implementation. Despite the number of studies, the evidence on implementation does not yet clearly indicate which approaches to screening are most feasible and sustainable in busy clinical settings. This is in part because the existing studies were primarily cross-sectional, descriptive, and involved small sample sizes. No articles on reach or maintenance included a comparison group. Only 1 article on adoption and 2 articles on implementation used a comparison group.40,60,62 These design limitations, along with the variability in implementation approaches across studies, limit the generalizability of findings. We can nonetheless use this synthesis to highlight topics where future research is most needed to fill outstanding knowledge gaps.
There is markedly little evidence in the existing literature on screening equity, including unanswered questions about how different implementation strategies affect implementation outcomes in different populations and settings. This is striking given that social screening initiatives are anticipated to increase health care teams’ awareness of patient social risks that adversely impact health and consequently to lead to activities that decrease social risk or otherwise accommodate risks in ways that will improve health outcomes and health equity.7–9 In the 5 studies that evaluated screening by patient demographic characteristics such as race, ethnicity, and language, screening rates differed by race/ethnicity but no consistent patterns were identified,31–33,36,37 and screening was lower among patients who preferred to speak Spanish.33,36 In addition, interviews from 2 ED-based studies reported that frontline staff may be influenced by their own biases in determining which patients to screen.50,51 Prior research has demonstrated how provider bias can negatively impact the delivery of care and health outcomes.73 Because screening is often linked with interventions, similar practices in social screening have the potential to worsen disparities. No studies in this review explicitly examined strategies to improve rates of screening across different racial/ethnic/language groups.
The most frequently cited implementation barrier to screening was time, though administration time differed by tool and screening modality. Studies reported a wide range of time required for screening (1 to 9 minutes). Even at the low end, the additive effects of these screenings across a clinic day and/or in conjunction with multiple other screening intake requirements could be substantial for clinical team-administered surveys. There were few data on whether/how time for screening differs across diverse patient populations (eg, populations who prefer a non-English language) or on how to reduce the burden of screening administration time across patient populations, though 1 descriptive study suggested that device-assisted screening may reduce screening burden in patients with low literacy.59
The lack of research on screening time relates to the overall inadequate psychometric and pragmatic property testing of available screening tools.2,15 Although social screening tools are frequently referred to as “validated” in the literature, none that we are aware of meets gold-standard criteria for tool development and testing.2,15 Given the recognition that time is a frequent barrier to screening, some health care systems are experimenting with a single-item prescreener.74 Additional work is needed to compare both the validity/reliability and the implementation of tools of different lengths, including the possibility of a single-item screener.
We found very limited evidence on implementation design elements, for example, screening workforce or screening modality, that can improve reach, adoption, or maintenance of screening. For instance, although few studies included CHWs, 3 descriptive studies suggested that including CHWs can positively influence screening reach, patient risk disclosure, and cost efficiency.41,61,63 Other studies, including 1 randomized trial, indicated that new technologies, for example, digital device-assisted screening, also might have a positive role in facilitating social screening reach and risk disclosure.59,62 Both screening workforce and screening modality should be the subject of future rigorous and comparative effectiveness evaluations. These types of studies should be conducted in settings serving diverse patient populations and with limited staff capacity, where more support is essential to achieving equitable screening implementation and improving patient experiences with screening.
Finally, several studies, including those using experimental designs,40,43,44,46 suggest that health professional education/training and continuous quality improvement projects can positively impact clinician/staff adoption of screening practices. These findings are consistent with results of a prior systematic review that found health professional education/training can positively impact provider perspectives and behaviors related to social screening.26 Existing social screening implementation guidelines and best practice recommendations include an emphasis on continuous workforce education/training.32,75
Limitations
This review should be interpreted in light of its limitations. First, this is a systematic scoping review, which by design is intended to be a preliminary assessment of the evidence on a topic.76 A scoping review was appropriate given that this is the first attempt to evaluate the evidence on social screening implementation76 and given the rapid recent growth in both social screening and related research. Due to study heterogeneity, findings were challenging to synthesize and may not be generalizable. Second, the review was limited to peer-reviewed studies published in academic journals between January 2011 and February 2022. It is possible that we missed relevant gray literature in our review and/or that relevant research has been published since then. Third, this review focused on screening, not health care teams’ responses to identified social risks, which can include using shared decision making to adjust medical care and connecting patients with resources (ie, assistance). This was by design, to concentrate our scoping review on 1 aspect of social and medical care integration. We acknowledge, however, that there are many reasons to screen individual patients for social risks.9 We focused on screening reach, adoption, implementation, and maintenance as a first step in understanding the potential impacts of social screening interventions.
Conclusion
Despite an increasing number of efforts to integrate social screening into the delivery of health care, few studies have compared different approaches to sufficiently guide best implementation practices, for example, practices that maximize screening reach, adoption, and maintenance in different clinical settings. Many opportunities exist to improve implementation research in this area. These should begin by surfacing facilitators of and barriers to screening efforts and move on to comparing different implementation strategies, including how different strategies may affect populations experiencing socioeconomic marginalization, racism and discrimination, and other structural/systemic barriers to health, who may benefit most from social interventions. As payers and health care systems contemplate quality metrics for social screening, elevating what works/does not work and for whom can help to avert unintended harms of future social screening efforts.
Acknowledgments
The authors thank Hugh Alderwick, PhD, and Becks Fisher, MD, for their insights and contributions to this review. This review was conducted with support from the Robert Wood Johnson Foundation. The findings presented are those of the authors and do not represent the official position of the Robert Wood Johnson Foundation. A simplified version of this review was included as part of a publicly available report published online June 23rd, 2022:
De Marchis EH, Brown E, Aceves B, Loomba V, Molina M, Cartier Y, Wing H, Gottlieb LM. State of the Science on Social Screening in Healthcare Settings. 2022. San Francisco, CA: Social Interventions Research and Evaluation Network. Available online: https://sirenetwork.ucsf.edu/tools-resources/resources/state-science-social-screening-healthcare-settings#Full-Report.
Study authors have full access to and control of study data.
Appendix. Database search details
PubMed search strategy (n = 6157)
Concept 1: Health care-based screening
(“survey”[tiab] OR “questionnaire”[tiab] OR “measurement”[tiab] OR “instrument”[tiab] OR “screen*”[tiab] OR “tool”[tiab])
AND
Concept 2: Social risk factors
(“Social Conditions”[tiab] OR “social risk*”[tiab] OR “SDOH”[tiab] OR “determinants of health”[tiab] OR “structural determinant*”[tiab] OR “social factor*”[tiab] OR “behavioral determinant*”[tiab] OR “social determinant*”[tiab] OR “social need*”[tiab] OR “basic needs”[tiab] OR “basic need”[tiab])
AND
(“English”[Language] AND 2011/01/01:2021/08/08[Date - Publication])
Embase search strategy (n = 4564)
Concept 1: Health care-based screening
('survey':ab,ti OR 'questionnaire':ab,ti OR 'measurement':ab,ti OR 'instrument':ab,ti OR 'screen*':ab,ti OR 'tool':ab,ti)
AND
Concept 2: Social risk factors
('Social Conditions':ab,ti OR 'social risk*':ab,ti OR 'SDOH':ab,ti OR 'determinants of health':ab,ti OR 'structural determinant*':ab,ti OR 'social factor*':ab,ti OR 'behavioral determinant*':ab,ti OR 'social determinant*':ab,ti OR 'social need*':ab,ti OR 'basic needs':ab,ti OR 'basic need':ab,ti)
AND
[english]/lim AND [1-1-2011]/sd NOT [08 to 09-2021]/sd AND ([embase]/lim OR [medline]/lim)
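For readers who wish to re-run or audit the documented PubMed strategy programmatically, a minimal sketch is shown below. This is illustrative only and was not part of the review methods; it assumes Biopython's Entrez client for the NCBI E-utilities, a placeholder contact e-mail, and an arbitrary 10,000-record retrieval cap. Hit counts will differ from the n = 6157 reported above because PubMed indexing changes over time.

# Illustrative sketch (not part of the authors' methods): re-running the
# two-concept PubMed strategy via NCBI E-utilities using Biopython.
from Bio import Entrez

Entrez.email = "your.name@example.org"  # placeholder; NCBI requires a contact address

concept1 = ('"survey"[tiab] OR "questionnaire"[tiab] OR "measurement"[tiab] OR '
            '"instrument"[tiab] OR "screen*"[tiab] OR "tool"[tiab]')
concept2 = ('"Social Conditions"[tiab] OR "social risk*"[tiab] OR "SDOH"[tiab] OR '
            '"determinants of health"[tiab] OR "structural determinant*"[tiab] OR '
            '"social factor*"[tiab] OR "behavioral determinant*"[tiab] OR '
            '"social determinant*"[tiab] OR "social need*"[tiab] OR '
            '"basic needs"[tiab] OR "basic need"[tiab]')
limits = '"English"[Language] AND 2011/01/01:2021/08/08[Date - Publication]'
query = f"({concept1}) AND ({concept2}) AND ({limits})"

# Run the search and report the hit count and PubMed IDs (capped at 10,000 here).
handle = Entrez.esearch(db="pubmed", term=query, retmax=10000)
record = Entrez.read(handle)
handle.close()
print(f"Hits: {record['Count']}")
pmids = record["IdList"]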
Notes
This article was externally peer reviewed.
Funding: This work was supported by a grant from the Robert Wood Johnson Foundation.
Conflict of interest: None.
To see this article online, please go to: http://jabfm.org/content/36/4/626.full.
- Received for publication November 28, 2022.
- Revision received April 5, 2023.
- Accepted for publication April 10, 2023.