Abstract
Purpose: To understand staff and health care providers' views on potential use of artificial intelligence (AI)-driven tools to help care for patients within a primary care setting.
Methods: We conducted a qualitative descriptive study using individual semistructured interviews. As part of a province-wide Learning Health Organization, Community Health Centres (CHCs) are a community-governed, team-based delivery model providing primary care for people who experience marginalization in Ontario, Canada. CHC health care providers and staff were invited to participate. Interviews were audio-recorded and transcribed verbatim. We performed a thematic analysis using a team approach.
Results: We interviewed 27 participants: 26 across 6 CHCs and 1 from the province-wide organization. Participants lacked in-depth knowledge about AI. Trust was essential to acceptance of AI: people need to be receptive to using AI and feel confident that the information is accurate. We identified internal influences on AI acceptance, including ease of use and the ability to complement rather than replace clinical judgment. External influences included privacy, liability, and financial considerations. Participants felt AI could improve patient care and help prevent provider burnout; however, there were concerns about the impact on the patient-provider relationship.
Conclusions: The information gained in this study can be used for future research, development, and integration of AI technology.
Introduction
Research on artificial intelligence (AI) emerged more than 70 years ago1 and has transformed industries such as digital marketing and weather forecasting.2,3 In the past decade, AI research in health care has dramatically increased,4 likely due to greater access to health data through electronic medical records (EMRs) and digital testing.5,6 In primary care, AI research is still at an early stage, with a focus on development of potential AI-driven tools rather than implementation.7–9 There are unique considerations for the use of AI in primary care compared with other health care settings due to the broad scope of care, fewer clearly defined clinical outcomes, and heterogeneous populations.10,11 Common uses of AI in primary care are diagnostic and treatment decision support, operational efficiencies, prediction of health care outcomes, and summarizing data.7 Before widespread implementation of AI tools, we need to better understand frontline staff and providers' perceptions of using AI to help deliver optimal primary care. A recent scoping review on perceptions about increasing use of AI in health care identified only a few studies within primary care,12 some of which focused on the opinions of research and policy stakeholders13,14 and others of which used a survey design.15,16
We conducted this study to inform an agenda for AI research within a province-wide primary care Learning Health Organization in Ontario, Canada. The objective was to better understand the views of health care providers and staff on potential use of AI-driven tools to care for patients within primary care.
Methods
Study Design and Setting
We conducted a qualitative descriptive study with approval from the Western University Research Ethics Board (Project ID: 112799) and followed qualitative research reporting guidelines.17–19 In Ontario, Canada, primary care is publicly funded under the provincial health care plan; however, different reimbursement models exist. Organized through the Alliance for Healthier Communities, Community Health Centres (CHCs) are one of these models. CHCs serve approximately 600,000 people, focusing on community-governed primary care for socially disadvantaged individuals.20 The Alliance has established itself as a Learning Health Organization, in which analysis of EMR data is used for quality improvement, and is considering using AI tools to provide decision support for patient care.21
Sampling and Recruitment
Clinical and support staff from all 73 CHCs were eligible to participate. We aimed for maximum variation on role (executive directors, managers, data support staff, and health care providers including family physicians, nurse practitioners, nurses, and social workers) and location (rural and urban). A research team member with the Alliance emailed the executive directors of all CHCs, inviting their teams to participate.
Data Collection
Two research team members conducted individual semistructured interviews between December 2019 and March 2020. Interviews lasted 40 minutes on average and were mostly conducted in person at the CHCs, with 2 by video conferencing and 1 by telephone. We piloted the interview guide with 3 Alliance staff members not in the final sample; revisions were made after the pilot interviews and throughout the study as new themes emerged or clarification was needed.
We described the purpose of the interviews to participants as gathering perspectives on the development and use of AI-driven or predictive analytic tools to guide future work to help improve the early detection and treatment of chronic diseases. We asked questions about barriers and facilitators to integrating and using AI-driven tools to provide decision support in the workplace and about the benefits and risks to using AI for this purpose.
We took field notes and audio-recorded all interviews, which were then transcribed verbatim. One researcher reviewed each transcript for accuracy and uploaded them to NVivo 11 for data management.
Data Analysis
The thematic analysis was iterative and interpretive.22,23 To ensure credibility and confirmability, 2 researchers independently reviewed and coded transcripts, met to discuss and compare coding, and consulted with a third member throughout the analysis to refine the themes.24 We developed a coding template based on the initial analysis, which we revised as new themes emerged. Saturation was achieved when no new themes or subthemes were identified.25,26 Throughout the study, we practiced reflexivity, recognizing our own values and opinions to help prevent them from influencing participants' responses or our interpretation of the findings.27
Results
Study Participants and CHC Characteristics
We interviewed 26 individuals across 6 CHCs, plus 1 Alliance member who was not associated with a CHC, for a total of 27 participants. Participants included 10 health care providers, 8 executive directors, 8 data support staff, and 4 managers; 3 individuals had more than 1 role. Participants had worked at their CHC for an average of 7 years. Three CHCs were urban, 2 were rural, and 1 was suburban.
Participants described their clients as mostly low income, with many experiencing poverty or homelessness. Some participants indicated their CHC served many newly arrived immigrants and refugees, whereas others served a large proportion of people with mental health and addiction issues or other complex needs. The CHCs served diverse communities, but all included a large proportion of non-English and non-French speakers.
Overview of Study Findings
Participants noted that the health care context, particularly the complex patients they serve, needs to be considered when developing and implementing AI technology. An overarching theme was that trust is essential to acceptance of AI: people need to feel confident that the information is accurate, and prior negative experiences can deter acceptance. Another theme was internal and external influences on AI acceptance. Internal influences included ease and efficiency of use and the ability to complement, rather than replace, clinical judgment. External influences included privacy and liability considerations, as well as the finances to develop the technology. Participants anticipated that AI could have a positive impact on patient care and help prevent provider burnout; however, there were concerns about the potential impact on the patient-provider relationship (Figure 1).
Context of Health Care Setting
The usefulness and applicability of AI within CHCs need to be considered within the context of the populations they serve:
The client populations that are served by CHCs are extremely complex, and you wouldn't want to have an AI algorithm that doesn't take into account all the various social and medical complexities, so if you have an AI thing that says, “Yeah give them these treatments” but they're all treatments that cost a lot of money or really expensive drugs. I think that would be the risk to the clients, and then it would be a risk to the sector because it wouldn't be very useful.—Data Support 6
Knowledge
Most participants lacked a deep understanding of AI and how it might operate in the CHCs: “And I do not know enough about artificial intelligence to give you big ideas of what could be done.”—Health Care Provider (HCP) 9. Participants expressed the importance of frontline staff understanding how AI works before using it in practice: “I would really have to understand and have folks there that also understand how it works or can teach us. So we're not just blindly relying on technology in a setting where we're working with humans.”—Data Support 3.
Foundation of Trust: Accuracy, Experience, and Openness
Trust in AI was the resounding theme among participants, with the main concern being the accuracy of the AI: “I think it would have to be run as informative but not actionable for a while first where it just demonstrates that it is working, that it is effective before people would start to trust it.”—Manager 1. Participants also noted that the flaws may lie not in the AI itself but in the data: “I think the risks would be if somebody inputted something wrong and it then prompting you to do something. Like the data at the end is only as good as how you input the data at the beginning.”—HCP 1.
Some participants had experience with non-AI-driven decision support tools: “We worked a little bit with some clinical tools that aid in decision-making when we transitioned [to our new] EMR, but I would say it was very user driven and not so much analytic power on the computing system.”—HCP 3. Some participants felt that negative experiences with EMR functionality may be the source of some hesitancy: “The only barrier would be that we've had some bad experiences with EMRs and the functionality. So there might be a little bit of initial skepticism for something new.”—HCP 2. Furthermore, participants expressed that some people may resist AI based on their comfort level and ability to use technology:
I think 80% of that combined group would be onboard. 20% do not have the faith, are not mentally advanced with technology. There were a few people that transitioning to a new EMR was like cutting off an arm.—Executive Director 1
However, despite some hesitation, most participants were open to using AI:
I can't see there being a barrier implementing it into our workplace because I think for the most part our providers are almost eager for this type of information. Anything that can help them and help their client.—Data Support 4
Internal and External Influences
Building on the foundation of trust, participants described internal and external influences that could impact the success of implementing and using AI within the Alliance.
Internal Influences
Participants described how AI technology needs to be easy to use: “If you build the brain power, AI power to do something, it has to be really user-friendly at the end of the day.”—HCP 3. It should also make providers' jobs easier:
That's how frontline staff tend to look at data, does it make my life easier? Is it easier to look at? Does it give me the information that I want when I want it? Is it reliable? And if it's not then I'm going to do it an easier way.—HCP 1
And it should not take away time from patients: “Especially as our patients are becoming more and more complicated, we need to ensure that the EMR supports us and does not make us work harder, so we can sink those efforts and energies into the patient.” —HCP 4.
Participants also observed that AI should be used in combination with clinical judgment and sensitivity to patients' specific concerns and context:
There's things too with palliative care, end of life or larger concerns where “I'm not going to address the smoking because it's not the biggest concern.” There's nuances to how care happens so you have to be suggestive, not directive.—Manager 1
Some participants were concerned that frontline providers may rely too heavily on AI and forfeit their own judgment and knowledge:
My greatest fear is that you lose your critical thinking because something's going to come up on a screen and tell you what to do.—HCP 5
On the other hand, some participants felt that providers may reject the AI recommendations if they do not align with their own knowledge and judgment:
I think there are a lot of people who think that they know more than the computer or that there's that human element that the computer doesn't have. That gut feeling that the doctor has that, this is what it is. Well, you can't get that with AI.—HCP 6
External Influences
Participants expressed concerns around health data privacy and security:
It's the big conglomerates that want to use this and I know they use it in other areas like social studies. But health data, people feel discomfort, right.—Data Support 2
Some participants expressed concerns around liability and malpractice suits with AI:
I think that's always something that as providers that's bred into you, to be concerned about liability. To me right now I think of the EMR as ammunition for lots of things, like in the past we didn't have that, it was their word against ours, but now everything's trackable.—HCP 7
Other participants felt that AI could actually help address liability concerns: “Hopefully it would let them feel that there's less likelihood of liability issues with something being missed, because I know that is a huge issue with the clinical staff.”—Executive Director 1.
Participants also acknowledged financial concerns: “Where is the investment going to come from? People are going to be skeptical about investments into that, when our health care system is already, from the public's perception, really struggling.”—Executive Director 1. Technical barriers, such as compatibility with EMRs, were also acknowledged:
I think the biggest barrier you would have is going to the various EMR providers and getting them to build it into their systems. Or to at least have an interface where they could export their data and then bring it back in within [a quick time].—Data Support 4
Anticipated Impact of AI
Participants described potential positive impacts of using AI within the CHCs, including more efficient and improved patient care:
We've chosen to work in a Community Health Centre, because we're very dedicated to health promotion, illness prevention, screening, quality of care; those things are very dear to most of our providers…Anything that can make our work more efficient and more consistent would be welcome.—HCP 8
Participants also expressed how AI could help providers: “Trying to remember all the guidelines and having to look in so many spots, I think that that creates a lot of mental fatigue [for the provider].”—HCP 8. And that this was particularly relevant for CHCs:
The more I read about AI and the more I see the selling point is that physician burnout piece. And I know it's across every sector but in CHCs too because they're underpaid, they're not getting the increases, they are paid salaries—there's nothing they can do to work harder or to get more money. But kind of framing it in that “Oh this is going to help you. We understand you're suffering.”—HCP 5
These positive impacts of AI were tempered by participants' concerns of how it could influence the patient-provider relationship:
Also, if we rely more and more on technology, it will also cause clients to say, “I don't even talk to my provider anymore.”…It takes time for our clients to share information. And at times you'll see they're withholding a lot of information that's quite important. And then once there is trust they'll say a lot of things. And so, if we want to provide good care from the very beginning, losing [the trusted relationship] is going to delay care that the clients would need.—Manager 2
Discussion
We conducted this study to better understand how staff and providers perceived the potential use of AI in a primary care setting to help care for their patients. This information helps to fill a gap in the literature and will help the Alliance and other similar primary care organizations to design, integrate, and evaluate AI technology in a way that maximizes the chance of success.
Our study found that AI technology will need to be tailored to the unique needs of the CHCs and their clients who experience marginalization. Similarly, researchers focusing on the use of clinical decision support systems within primary care have described the importance of the patient context.28 For instance, someone who is homeless would have very different health considerations and social supports than someone who has regular access to healthy food and shelter.
Participants in our study had a basic understanding of AI but lacked in-depth knowledge. Similarly, studies from the UK also found that primary care providers had a poor understanding of AI15 and that knowledge of how a tool was developed was needed to build trust.28 An editorial on AI and primary care described the importance of users understanding AI algorithms, while noting that this may be difficult given their complexity.8 As a first step to ensuring acceptance of AI in the primary care setting, we recommend that frontline staff and providers receive education to better understand how AI-driven tools work before using them; existing courses and resources are available.29–31
We identified trust as an overarching theme throughout the analysis. A recent stakeholder consultation with providers, patients, researchers, industry partners, and policy makers within Ontario also identified trust in technology as a requirement for successful use of AI in primary care.13 In our study, participants described that a key factor to ensuring trust is having confidence in the accuracy of the AI-driven tools. Other researchers have described the importance of accuracy in AI algorithms within health care, because errors could result in serious consequences.8,32 To help address this concern, a recognized practice when implementing new AI-driven tools that use health care data is to first test their accuracy through a “silent” period, during which the algorithm is run but its output is not acted on.33 Another concern raised by participants was that accuracy depends on the quality of EMR data. Other studies have also described that EMR data may not be accurate enough for the development of AI-driven tools,13,34 which is an important barrier to explore further.
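As a concrete illustration of such a “silent” period, the minimal sketch below logs an algorithm's predictions for later accuracy checks without surfacing anything to providers. It is an assumption-laden sketch only: every name in it (predict_risk, the record fields, the CSV log) is a hypothetical placeholder, not a detail of any tool described in this study.

```python
# Minimal sketch of a "silent period" (shadow-mode) evaluation, assuming a
# hypothetical risk model and patient records represented as dictionaries.
import csv
from datetime import datetime, timezone

def predict_risk(record: dict) -> float:
    # Stand-in scoring rule; a real deployment would call the actual model.
    return 0.8 if record.get("a1c", 0.0) > 8.0 else 0.2

def run_silent_period(records: list[dict], log_path: str = "shadow_log.csv") -> None:
    """Score each record and log the prediction; nothing is shown to providers."""
    with open(log_path, "a", newline="") as f:
        writer = csv.writer(f)
        for record in records:
            writer.writerow([
                record["patient_id"],            # pseudonymized identifier
                predict_risk(record),            # logged, not acted on
                datetime.now(timezone.utc).isoformat(),
            ])

if __name__ == "__main__":
    run_silent_period([{"patient_id": "p001", "a1c": 8.4}])
```

After the silent period, the logged predictions could be compared with observed outcomes (for example, sensitivity and specificity) before the tool is switched from “informative” to “actionable,” echoing Manager 1's suggestion above.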
Participants also explained how poor previous experience with EMRs could deter some people from trusting AI. Based on lessons learned from early EMR development, it is essential that AI researchers work closely with providers when developing and implementing AI technology to ensure that it caters to the end user.11 Despite some concerns, most participants in our study would be open to using AI-driven tools. Castagno and Khalifa15 also found that despite low knowledge about AI, National Health Service staff and providers believed that AI could be useful to their practice.
Participants described that key influences on accepting and using AI-driven tools are that they be user-friendly and not detract from patient care, which is consistent with findings from previous literature.13,35 A study on the use of EMR tools by primary care providers found that some users may require training, which can help to increase familiarity with the tools and improve their use.28 Therefore, we recommend that frontline staff and providers receive appropriate training on the use of new AI-driven tools.
Another key influence that participants described was the need for balance between following AI recommendations and using clinical judgment. Some participants were concerned about providers relying too heavily on AI-driven tools, whereas others were concerned that providers would disregard AI recommendations that did not align with their own judgment. Buck et al35 described a related finding: providers want to have autonomy over the decision to use AI-enabled tools.
Two major external influences that participants described were concerns about data privacy and liability when providers use AI to inform care practices. Other studies have also highlighted concerns about privacy,8,15,35 including the public's perception of and need for assurance about the protection of their health data.36 Well-designed AI technology should not add any risk to privacy beyond what already exists with an EMR system. With regard to potential malpractice for following AI suggestions, it is unclear who would be legally liable.32 A better understanding of the legal implications of using AI-driven tools in medical decision-making is needed.
There was overwhelming agreement among participants that AI could lead to improved patient care and reduced provider burnout. The main concern among participants was the potential negative impact on the patient-provider relationship, which is consistent with findings from other studies.13,35 AI is not intended to replace health care providers and staff or the relationships they have developed with patients16; rather, it should help them perform their jobs more efficiently and accurately.11
Strengths and Limitations
We used maximum variation purposeful sampling to support transferability of our study results to all CHCs in Ontario. Although we only interviewed participants from 6 CHCs, they were diverse and reflected the majority of CHCs based on rural and urban locations and populations served. We aimed to include participants in different roles, but some groups were underrepresented, including allied health professionals, nurses, and family physicians. We did not provide specific examples of how AI could be used, because part of the interview asked participants to describe potential uses of AI that would be helpful to their role (not presented in this article), and we did not want to influence participants' opinions. Given their lack of knowledge about AI, participants' opinions may have been partly influenced by common misperceptions about AI and may reflect broader concerns about new technology implementation. Because CHCs serve a more disadvantaged population and are organized differently than other primary care models in Ontario, our study results may not be fully transferable outside of CHCs. However, our findings align with previous literature on AI and primary care in Ontario13 and in other regions.14–16,35
Conclusions
This is the first study to provide a comprehensive description of health care staff and providers' views on the use of AI-driven tools in a primary care setting. We identified mostly positive perceptions of the use of AI within primary care. Trust is the underlying foundation and will be essential to ensuring successful integration and use of AI. Internal and external influences, such as ease of use, complementing health care decision-making, and financial and liability factors, must also be considered. This information can be used to inform future research on the development and integration of AI-driven tools within the Alliance Learning Health Organization and other similar health care models.
Notes
This article was externally peer reviewed.
Funding: This project was funded by a Canadian Institutes of Health Research Planning and Dissemination grant. DMN's training is supported by a Canadian Institutes of Health Research Post-Doctoral Fellowship.
Conflict of interest: JR is an employee of the Alliance for Healthier Communities. No other authors have any competing interests.
- Received for publication May 12, 2022.
- Revision received September 18, 2022.
- Revision received October 14, 2022.
- Accepted for publication October 20, 2022.