Stakeholder Engagement in a Patient-Reported Outcomes (PRO) Measure Implementation: A Report from the SAFTINet Practice-based Research Network (PBRN)
=====================================================================================================================================================

* Bethany M. Kwan
* Marion R. Sills
* Deborah Graham
* Mika K. Hamer
* Diane L. Fairclough
* K. E. Hammermeister
* Alicyn Kaiser
* Maria de Jesus Diaz-Perez
* Lisa M. Schilling

## Abstract

*Purpose:* Patient-reported outcome (PRO) measures offer value for clinicians and researchers, although their priorities and value propositions can conflict. PRO implementation in clinical practice may benefit from stakeholder engagement methods to align research and clinical practice stakeholder perspectives. The objective of this report is to demonstrate the use of stakeholder engagement in PRO implementation.

*Method:* Engaged stakeholders represented researchers and clinical practice representatives from the SAFTINet practice-based research network (PBRN). A stakeholder engagement process involving iterative analysis, deliberation, and decision making guided implementation of a medication adherence PRO measure (the Medication Adherence Survey [MAS]) for patients with hypertension and/or hyperlipidemia.

*Results:* Over 9 months, 40 of 45 practices (89%) implemented the MAS, collecting 3,247 surveys (mean = 72; median = 30; range, 0–416). Facilitators included an electronic health record (EHR) with readily modifiable templates; existing staff, tools, and workflows in which the MAS could be integrated (e.g., health risk appraisals, hypertension-specific visits, care coordinators); and engaged leadership and quality improvement teams.

*Conclusion:* Stakeholder engagement appeared useful for promoting PRO measure implementation in clinical practice in a way that met the needs of both researchers and clinical practice stakeholders. Limitations of this approach and opportunities for improving the PRO data collection infrastructure in PBRNs are discussed.

* Comparative Effectiveness Research
* Hypertension
* Medication Adherence
* Patient Outcome Assessment
* Patient-centered Outcomes Research
* Practice-based Research

Two key elements of patient-centered outcomes research (PCOR) are (1) stakeholder engagement throughout the research process, and (2) selection and measurement of “outcomes that the population of interest notices and cares about and that inform decision making relevant to the research topic.”1 The implication is that, historically, outcome measures in clinical research (eg, blood pressure, survival, laboratory test results) have not reflected outcomes that matter to stakeholders in that research (eg, patients, clinicians, researchers).2 Outcomes that matter may reflect more subjective states of patient well-being, experience, or behavior that are not directly observable by another person.
Patient-reported outcomes (PROs), defined as “outcome[s] reported directly by patients themselves and not interpreted by an observer,”3 have been widely adopted as one way to assess outcomes that matter to PCOR stakeholders.4 Hence, PCOR protocols often call for the measurement of PROs, and for stakeholders in that research to be engaged in making decisions about which PRO to measure and how to measure it (ie, PRO selection), and how to systematically collect PRO measures from the population of interest (ie, PRO measure implementation).5–8 In practice-based research, PRO measures are often meant to be implemented in the clinical practice setting, administered to patients in the context of their care. Thus, both the researchers and those interfacing with clinical practice (eg, patients, clinicians, staff, operations, and practice and organizational leadership) are stakeholders whose diverse perspectives should be considered in a PRO selection and implementation effort. In addition to increasing relevance and interest, engaging both researchers and clinical practice stakeholders in PRO selection and implementation may reduce barriers to implementation and enhance data quality and value.5,8–15 While there are several frameworks and methods for stakeholder engagement,15 there is currently limited literature on the application of these methods in PRO selection and implementation. The objective of this article is to describe the application of a stakeholder engagement methodology to PRO implementation in clinical practices involved in a practice-based research network (PBRN). The PRO measures implemented as a result of this effort are then intended to be used as outcomes data for our broader PCOR efforts concerning the effects of care in a patient-centered medical home (PCMH) on outcomes for a cohort of patients with hypertension and/or hyperlipidemia.

### PRO Measures and Uses

PROs are often measured using questionnaires administered directly to patients, either verbally or in a written or electronic survey.16 PRO measures include measures of a patient's health status, ability to function, symptoms, quality of life, or experience of care.17 Information provided by PRO measures can be used to inform individual patient care, population surveillance, adverse event monitoring, quality improvement, performance monitoring, and research.18,19 In clinical practice, uses of PRO measures include monitoring symptoms, disease progress, and responsiveness to treatment; augmenting patient–provider communication and shared decision making; and providing feedback to health care professionals as a performance measure.6,20–23 Systematic reviews of the effects of implementing PRO measures in clinical practice show potential for improved care processes (eg, patient–provider communication, diagnosis), whereas findings related to health outcomes (eg, patient health and well-being, satisfaction) are mixed.6,7,22,24,25

### PRO Implementation

While PRO measures have value for both clinical and research purposes, there are scientific and logistical barriers to efficient and effective implementation and use of the results—a longstanding problem in practice-based data collection efforts.26,27 Beyond common concerns such as data validity and quality (eg, data completeness),28–30 researchers and clinicians may have conflicting priorities that make it difficult to collect PRO measures to serve research and clinical purposes.
The most valid measures and rigorous data collection methods from a research perspective may not have sufficient clinical, quality improvement, or operational value; as such, practices may not be able to justify the time required to modify workflows, implement data collection tools, and train personnel. Yet, when individual practices implement PRO measures in ways that best fit their environment and workflow (a priority for clinicians), this introduces practice-level inconsistencies in how data are collected and possible selection bias, creating threats to validity from a research perspective.31 Thus, PRO implementation in clinical practice may benefit from stakeholder engagement methods that facilitate alignment of research and clinical practice stakeholder perspectives.

### Stakeholder Engagement Methods

Deverka and colleagues15 define stakeholder engagement as “An iterative process of actively soliciting the knowledge, experience, judgment and values of individuals selected to represent a broad range of direct interests in a particular issue, for the dual purposes of: 1) Creating a shared understanding; and 2) Making relevant, transparent and effective decisions.” Early steps in engagement include identifying relevant stakeholders, establishing roles and responsibilities of the various stakeholders, and selecting an engagement strategy.14 This engagement strategy should build trust, garner the commitment of the stakeholders and researchers, and elicit and align diverse perspectives. Stakeholder engagement methods go beyond elicitation of the perspectives of stakeholders—as exemplified by focus group or key informant interview methods—to encompass full participation and collaboration.15,32 Through such collaboration, research and clinical stakeholders seek to select PRO measures that are “actionable, efficient, interpretable, obligatory, and user-friendly,” aspects that are thought to be key features of successful PRO implementation.33 Thus, we expected that stakeholder engagement strategies would allow us to select and implement PRO measures in a way that met both research and clinical practice needs.

## Methods

### Setting

The Scalable Architecture for Federated Translational Inquiries Network (SAFTINet) is a multistate, safety-net focused PBRN. PBRNs are built on a foundation of stakeholder engagement and are a vital laboratory for real-world research.34,35 SAFTINet is also a distributed data network,36 with locally controlled databases of administrative, clinical, Medicaid claims and enrollment, and PRO data—including the PRO data gathered through the stakeholder engagement process described here—which can be used for a broad range of research, quality improvement, and care delivery purposes.37 At the time of this project, SAFTINet had 4 partnering clinical practice organizations, with 54 participating primary care practices (federally qualified health centers or federally qualified health center “look-alikes”) in Colorado and Tennessee, caring for approximately 260,000 unique patients per year. An estimated 65,000 patients had hypertension and 39,000 patients had hyperlipidemia. Specialties include family medicine, internal medicine, pediatrics, and behavioral health. Practices range in location from urban to rural; all have an electronic health record (EHR).
### Stakeholder Engagement Methods

Existing models and frameworks for stakeholder engagement in PCOR include a taxonomy of stakeholder engagement proposed by Concannon and colleagues,8 and a conceptual model for stakeholder engagement from Deverka and colleagues.15 Both suggest that the first step is identifying relevant stakeholders, which may include researchers, clinicians, health care providers (the institutions), and patients, among others. According to the analytic-deliberative model for stakeholder engagement presented by Deverka et al, the stakeholders then undertake a process of gathering and analyzing the evidence (the inputs, which include stakeholder values, experience, and review of the literature), deliberating (the methods for combining evidence), and decision making (the decisions, including topic generation, study designs, and implementation strategies). Although this model was published after our work was completed, it closely mirrors our process and we refer to it here as an organizing framework.

### Identifying Relevant Stakeholders

The 7P stakeholder engagement framework highlights 7 stakeholder groups to engage in developing and implementing research protocols: patients, providers, “principal investigators” (ie, researchers), policymakers, product makers, payers, and purchasers.2 Our network's first research protocols were broadly concerned with studying the effects of health care delivery models (eg, the PCMH) on chronic disease control. While all “7Ps” of stakeholder groups are relevant to this research, we chose to focus primarily on providers and researchers for our first attempt at stakeholder engagement as a nascent PBRN (we have since engaged patients, payers, policymakers, product makers, and purchasers in our research). The stakeholder group labeled “providers” is a broad category that includes not just the clinicians themselves but also others who work for clinical practice organizations. This includes nursing staff, medical assistants, quality improvement teams, health information technology staff, and clinical operations and administration personnel. Upon recruitment to SAFTINet, each clinical practice organization partner named an internal site lead (typically someone in a leadership position, such as a clinical director, funded at 0.10 full-time equivalent [FTE] under a subcontract of our infrastructure development grant) and a site coordinator (typically someone with master's- or doctoral-level training in a clinical or public health–related field; funded at 0.20 FTE) to oversee and manage the scope of work required to implement the network infrastructure and research protocols (work much broader than that required to undertake the activities described in this article). The site leads were each asked to identify a person in their organization who could represent the clinical practice stakeholder perspective in developing research protocols, then serve as a champion for implementing the research protocols within their organization (also supported at 0.10 FTE by our grant). The selected “clinical practice representatives” were typically clinicians or other doctoral-trained individuals with interests in health services research or quality improvement; they were often those directly involved in internal PCMH implementation and evaluation.
### Analysis, Deliberation, and Decision Making

After identifying relevant stakeholders, the next step in stakeholder engagement is to undertake an iterative process of analysis, deliberation, and decision making.15 We primarily used facilitated group discussions for this process. The partner representatives, along with the site coordinators, several PCOR investigators, the network project manager, and community engagement leads, constituted our “partner engagement community” (PEC), which met twice monthly via Web conference for over 3 years to discuss the developing research protocols, including the process of PRO selection and implementation described in this article (which took place over 10 months of this 3-year period). The community engagement leads, who were jointly affiliated with the American Academy of Family Physicians National Research Network and the DARTNet Institute ([http://dartnet.info](http://dartnet.info)), coordinated and led the PEC meetings. During these meetings, we identified stakeholder needs, priorities, and concerns; reviewed relevant literature or met with experts to ensure our work was guided by the evidence; proposed and debated options; and ultimately reached a consensus. Detailed agendas and PowerPoint slides were used to organize thinking during PEC meetings; follow-up calls, E-mails, and meeting summaries highlighting key decisions and next steps were used to communicate plans (especially important when PEC representatives periodically missed the conference calls); and project plans were used to monitor progress. The PEC meetings also helped to foster trust and relationships among the stakeholder representatives, based on mutual respect and understanding. Between PEC meetings, the partner representatives would meet with others in their organization who had a vested interest in whatever topic was currently being discussed to vet options and ideas generated during PEC meetings. They then brought these perspectives back to subsequent PEC meetings to inform decision making. For instance, they would meet with quality improvement teams, individual providers in participating practices, or working groups dedicated to certain health conditions (eg, asthma, hypertension). Similarly, the researchers on the PEC would meet with the larger research team to discuss and provide input back to the PEC. We also conducted local partner site pilot testing and iteration before reaching final decisions. While we had an open invitation for partners to bring others from their organization to participate in the PEC discussions, this rarely, if ever, occurred because of busy schedules.

### Implementation Planning

The clinical partner representatives served as liaisons between the PEC and their local sites, and took responsibility for local decision making regarding specific implementation strategies. We provided a structured PRO planning worksheet to guide these local decision-making processes (see Online Appendix).
The PRO planning worksheet was designed for this project based on published recommendations from the International Society for Quality of Life Research (ISOQOL).38 The ISOQOL “User's Guide to Implementing Patient-Reported Outcomes Assessment in Clinical Practice” articulates 9 questions to be answered before implementation.39 We identified 3 additional implementation decisions specific to our objectives, which were necessary because of the scale of the project (54 practices) and the need to integrate the data into the SAFTINet databases, which are Observational Medical Outcomes Partnership common data model version 4 schemas ([http://OHDSI.org](http://OHDSI.org)), using a standardized data format to support interoperability. We addressed these decisions in 3 high-level steps, as shown in Table 1. Our overarching goals were to (1) collect data of sufficient scale and quality that they could be used for rigorous and relevant research, (2) improve care for patients at risk for cardiovascular disease, and (3) implement the data collection in a feasible, minimally disruptive, and sustainable manner.

[Table 1](http://www.jabfm.org/content/29/1/102/T1). Implementation Steps and Decisions for Patient-Reported Outcome Measures

#### Implementation Step 1: Determine PRO Content and Select a Measurement Tool

PRO data collection goals from the researcher and clinical partner perspectives are listed in Table 2. The specific context for the PRO implementation work pertained to our research on the effects of receiving care in a PCMH on outcomes for patients with increased cardiovascular risk as a result of hypertension and/or hyperlipidemia. The types of outcomes considered by the group to be relevant to this research context included general health status, patient activation, medication adherence, cardiovascular risk perception, readiness for change, and self-efficacy. We ultimately agreed on medication adherence and barriers to medication adherence as the content area. Notably, medication adherence is (1) consistent with existing clinical partner reporting requirements, (2) an important factor in reducing cardiovascular risk,40,41 and (3) potentially influenced by receiving care in a PCMH.42,43

[Table 2](http://www.jabfm.org/content/29/1/102/T2). Patient-Reported Outcome Measure Data Collection Goals

In general, attributes of appropriate instruments for measuring PROs include consistency with the conceptual and measurement model for the construct of interest, reliability, validity, ability to detect change, ease of interpretation of the results, low administrative and respondent burden, and existence of cultural and language adaptations and translations.44 Among the existing brief, validated instruments for medication adherence,45 we selected a 1-item instrument developed by Gehi and colleagues46 because of its simplicity and discriminant and face validity. This tool asks respondents to indicate how often in the last month they had taken their medication as prescribed. We slightly modified the instructions to state “as instructed” rather than “as prescribed,” to include over-the-counter as well as prescription medications. We adapted an existing, validated medication adherence barriers checklist at the request of the clinicians and practice representatives, focusing on barriers that are amenable to intervention and commonly encountered in clinical settings.47 The tool was translated into Spanish by a certified translator.
Before full-scale implementation, 3 of the clinical practice representatives asked 1 to 2 clinicians in their practices to pilot-test the tool's readability and clinical utility during real clinical encounters over the course of 2 weeks. The clinical practice representatives reported back to the PEC that the barriers question helped to engage patients in a conversation about barriers to medication adherence, which the clinician pilot-testers considered to be clinically useful. To support evaluation of the tool's utility, we added 2 items to assess the frequency of such conversations. After 2 more weeks of pilot testing, with no further suggestions, we deemed the tool appropriate for full-scale implementation. The final instrument, comprising the single-item medication adherence measure and the barriers checklist, was referred to as the Medication Adherence Survey (MAS).

#### Implementation Step 2: Establish Local Implementation Plans

Using our PRO planning worksheet, each of the clinical partner organizations established local implementation plans, considering existing workflow, personnel, resources, structures, and processes, and determined the best way in which a new PRO measure could be integrated into their practices (Table 3). Each clinical partner representative separately convened appropriate others in their organizations to discuss and complete the worksheet based on what they expected would work best in their practices. (Note that those “appropriate others” varied by organization, ranging from organization-level quality improvement teams or working groups tasked with quality improvement in cardiovascular risk reduction, to the collective of primary care clinicians within the organization.) After this local decision-making process occurred, members of each clinical partner organization reviewed their plan with members of the central research team, the feasibility of the plan was discussed, and revisions based on input from both researchers and clinicians were made.

[Table 3](http://www.jabfm.org/content/29/1/102/T3). Patient-Reported Outcome Measure Implementation Plan Details by Organization

To optimize response rates, data quality, and sustainability, we allowed for flexibility across partner organizations. The timelines for implementation varied across organizations, as PRO implementation needed to be in sync with other planned organizational activities (eg, EHR upgrades). We prioritized consistency across partner organizations in terms of (1) patients targeted, (2) frequency of administration, and (3) the format for the structured capture of results (ie, discrete fields for variable labels and values). Table 3 describes the implementation plans made by the 4 participating clinical organizations and the preferences and recommendations from the research team; each section of the table reflects a major section of the PRO planning worksheet.

#### Implementation Step 3: Implementation and Evaluation of the PRO Measure

Each of the clinical partner representatives worked with his or her respective practices to implement, evaluate, and adapt their PRO data collection processes according to plan and using the internal resources at their disposal. The network project manager and the community engagement leads provided high-level project management over the implementation process using project management tools, such as project plans and regular check-in calls with each site coordinator.
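To make the structured capture and common data model integration described above concrete, the sketch below shows one hypothetical way a single MAS response could be recorded as discrete fields and mapped to generic OMOP-style observation rows. The field names, response options, and concept identifiers are illustrative assumptions, not the actual SAFTINet or MAS specification.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical discrete capture of one MAS response (field names and
# response options are illustrative; not the actual MAS specification).
@dataclass
class MASResponse:
    patient_id: int
    visit_date: date
    adherence_last_month: str   # e.g., "all of the time", "most of the time"
    barriers: list[str]         # items endorsed on the barriers checklist

def to_observation_rows(resp: MASResponse) -> list[dict]:
    """Map a MAS response to generic OMOP-CDM-style observation rows.

    Concept IDs below are placeholders; a real mapping would use concepts
    agreed on by the network's data partners.
    """
    rows = [{
        "person_id": resp.patient_id,
        "observation_concept_id": 2000000001,   # placeholder: adherence item
        "observation_date": resp.visit_date.isoformat(),
        "value_as_string": resp.adherence_last_month,
        "observation_source_value": "MAS_ADHERENCE",
    }]
    for barrier in resp.barriers:
        rows.append({
            "person_id": resp.patient_id,
            "observation_concept_id": 2000000002,  # placeholder: barrier item
            "observation_date": resp.visit_date.isoformat(),
            "value_as_string": barrier,
            "observation_source_value": "MAS_BARRIER",
        })
    return rows

# Example: one survey with two endorsed barriers yields three observation rows.
rows = to_observation_rows(
    MASResponse(patient_id=123, visit_date=date(2013, 1, 15),
                adherence_last_month="most of the time",
                barriers=["forgetfulness", "cost"]))
print(len(rows))  # 3
```

Representing each endorsed barrier as its own coded row is one way to keep the data queryable across practices regardless of how an individual EHR template displays the checklist.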
Implementation was an iterative process, akin to continuous quality improvement, including auditing the implementation processes, following up with providers to ensure consistent data collection, and addressing any barriers and concerns about impact on workflow, patient satisfaction, and quality of care. Partners documented in their planning worksheets any changes over time to their data collection processes so that we could maintain a record of what changes occurred and when. We report on the approach to and results of the planning and implementation process based on detailed notes from conference calls and team meetings, project plans, reports from clinical partners showing the total number of surveys administered at each participating practice, slide presentations, and E-mails from throughout the process. These records were used to ensure we accurately described the process and numbers of surveys collected, and to summarize the lessons learned, which were explicitly discussed during PEC meetings and captured in the minutes from the meetings. Because the PRO was implemented as a clinical quality improvement project, informed consent of patients who completed the PRO was deemed unnecessary by 2 institutional review boards, so long as secondary use of clinical data for research purposes was described in the organizations' privacy policies. Use of the PRO data in our PCOR protocols (results to be reported elsewhere) is thus considered secondary use, and the data will be provided to the research team under a data use agreement.

## Results

Stakeholder meetings began in December 2011, and we reached consensus on implementation plans by September 2012. The mutually agreed-upon start date for use of the MAS was January 1, 2013. Each partner's local site coordinator was tasked with coordinating execution of the plan, training staff, monitoring progress, and assessing and addressing barriers to the implementation and use of the survey. Site coordinators reported monthly to the project manager the number of surveys collected at each participating practice. Given the focus on adult patients, 45 of the 53 network primary care practices were expected to participate in this PRO implementation (8 were pediatrics-only practices). Note that organization 3 ended up implementing the MAS in all practices and with all ages, including their 3 pediatrics-only practices, since the MAS was expected to be useful for the care of children taking medications for chronic conditions such as attention deficit hyperactivity disorder or asthma. Because this was unique to organization 3 and not relevant to our PCOR protocol, the numbers reported herein do not include surveys administered to those under age 18 years. Initial adoption of the MAS was slow. After 5 months, 22 of 45 practices (49%) had adopted the MAS (ie, collected at least 1 survey). Common barriers to implementation cited by partners included competing organizational priorities and workflow changes. At that time we discussed these challenges as a group and opted to adopt a benchmark: each practice would attempt to collect at least 25 surveys by the end of the grant period in September 2013. This 25-survey benchmark was based on a recommendation from the study biostatistician, who indicated that 25 surveys per practice (the unit of analysis in the multilevel models specified in our PCOR protocol) would provide adequate power for testing our primary hypothesis (ie, that patients receiving care in a PCMH have higher medication adherence than those who do not).
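The logic behind a per-practice survey target can be illustrated with a standard design-effect calculation for clustered data. The following is a minimal sketch, assuming an illustrative intraclass correlation (ICC) rather than the value the study biostatistician actually used.

```python
# Illustrative design-effect calculation for cluster-sampled survey data.
# The ICC is an assumption for illustration only; it is not the parameter
# used in the study's power calculation.

def design_effect(cluster_size: float, icc: float) -> float:
    """Kish design effect: DEFF = 1 + (m - 1) * ICC."""
    return 1 + (cluster_size - 1) * icc

def effective_sample_size(n_practices: int, surveys_per_practice: int, icc: float) -> float:
    """Total surveys divided by the design effect."""
    total = n_practices * surveys_per_practice
    return total / design_effect(surveys_per_practice, icc)

# With 45 practices contributing 25 surveys each and an assumed ICC of 0.05,
# 1,125 surveys behave roughly like 511 independent observations.
print(round(effective_sample_size(45, 25, 0.05)))  # -> 511
```

Under these assumptions, collecting many more surveys within the same practices yields diminishing gains in effective sample size, which is consistent with setting a modest per-practice benchmark.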
By the end of September, 4 months later, 40 of 45 practices (89%) had implemented the MAS and collected a total of 3,247 surveys, an average of 72 per practice (median, 30 surveys; range, 0–416 surveys). Among the practices, 58% (26 of 45) met the 25-survey benchmark. The distribution of survey totals across practices and organizations is shown in Table 4. Of note, the 22 practices that had already adopted the MAS at the time the benchmark was imposed outpaced those that had not (mean = 119 surveys; median = 91 surveys). Conversely, the 23 practices that had not yet adopted the MAS at that point reached little more than the 25-survey benchmark by the end of the project (mean = 27 surveys; median = 23 surveys). Thus, a benchmark seems to be helpful for late adopters.

[Table 4](http://www.jabfm.org/content/29/1/102/T4). Participating Organizations and Clinical Practices and Data Collection Results over the 9-Month Study Period

### Lessons Learned

At a PEC meeting, clinical practice representatives were asked to reflect on barriers and facilitators to implementation of the MAS across their practices. Partners reported that the following factors facilitated implementation of the MAS in clinical practice: the presence of an established EHR with readily modifiable templates (a critical factor present in organizations 1, 3, and 4, but not 2); the availability of care coordinators to facilitate data collection, interpretation, and triage (present in organization 1); audit and feedback reports; established tools and structures and associated workflows in which the MAS could be integrated (eg, health risk appraisals, hypertension-specific visits; present in organizations 3 and 4); and engaged leadership and quality improvement teams. The clinical practice stakeholders reported several challenges during this process. A major challenge was that maintaining momentum can be difficult when there is staff turnover, requiring additional training and reorientation to the project. To the extent that the data collection can be integrated into standard workflows and processes of care in which new staff will be trained, this issue could be mitigated. Organization 2 specifically cited the lack of available incentives to encourage practice staff and providers to administer the MAS as a barrier. Organization 1 cited lack of prior experience with practice-based research as a challenge (all other organizations have been involved with practice-based research for >10 years). Organization 2 cited competing priorities as a major barrier, mainly a system-wide effort to implement a new EHR, which prevented the MAS from being incorporated into EHR templates as was done at the other organizations. In addition, as described in the quality improvement and practice transformation literature,48 engaged leadership and a willing champion within each individual practice (eg, quality improvement leader or office manager) helped to maintain momentum, to demonstrate the value of the data for improving quality of care, and to provide audit and feedback to providers and staff. Practices lacking their own local champion (even with engaged leadership at the organization level) were thought to have struggled the most with implementation of the MAS. This approach to engagement required significant compromise and investment of effort from all parties.
The researchers had to tolerate the lack of fidelity to a specific data collection protocol, variability across practices in commitment to the work and its implementation, and a prolonged rollout period. Even with substantial financial support from our infrastructure grant, the clinical organizations had to commit considerable time and effort to discussion and planning, and then invest internal resources in a full-scale implementation of the PRO. Given that their highest priority was providing care to patients, this investment was at times difficult to justify. We estimate this work required approximately 2000 person-hours for the planning ([5 people × 100 hours each] + [10 people × 60 hours each]) and coordination of local site testing and implementation (4 organizations × 2–3 people per organization × 80 hours each). Ultimately, the clinical partner stakeholders perceived the MAS as having marginal clinical utility; although the barriers checklist at times drove conversations with patients, the single-item adherence measure was perceived as invalid because most patients reported taking their medications exactly as instructed >90% of the time. Although our stakeholder engagement process was designed to promote sustainable data collection, most SAFTINet practices elected to discontinue use of the MAS at the project's end, primarily because of the desire to minimize the data collection burden to patients and providers.

## Discussion

In this article we describe our method of engaging researchers and clinical practice stakeholders in the implementation of PROs relevant to the care of patients with hypertension and/or hyperlipidemia. We undertook this approach with the expectation that aligning research and clinical practice stakeholder perspectives would yield high-quality, complete, clinically useful, and sustainable data collection processes. Our process mirrored the analytic-deliberative model of stakeholder engagement developed by Deverka and colleagues,15 such that stakeholders analyzed inputs (stakeholder values, personal experience, and research evidence), deliberated using facilitated discussions, and decided on implementation strategies using planning worksheets informed by the ISOQOL User's Guide.39 Our approach to decision making and implementation can be likened to a cascading hub-and-spokes model (Figure 1). In summary, at the hub (the PEC), the site coordinator and clinical practice representative from each provider organization met with the research and governance personnel for high-level analysis, deliberation, and decision making. These representatives then convened separately with others in their local organizations and individual clinical practices to again analyze, deliberate, and make more specific, context-appropriate decisions locally, which were communicated back to the hub. The clinical practice representatives and site coordinators were then responsible for managing local implementation of decisions, including determining which specific individuals would take on various roles and responsibilities within the organization and within each practice in the organization.

[Figure 1](http://www.jabfm.org/content/29/1/102/F1). Cascading hub-and-spokes model of implementation. IT, information technology; PCOR, patient-centered outcomes research; QI, quality improvement.
We learned several lessons about the utility of a stakeholder engagement approach to PRO selection and implementation. Using this approach, we successfully selected and implemented PROs that met the needs of both researchers and clinical practice stakeholders, although there was only a loosely defined protocol, and tolerance for flexibility and adaptations across organizations was required. While resource intensive, a notable benefit of this approach was that clinical partners were willing to coordinate adoption of PRO measures across all primary care practices in their organizations. Another benefit of an approach in which PRO measures are implemented within the context of clinical care and quality improvement (ie, not just for research purposes, but rather to support clinical decision making at the patient level) was that the traditional research concept of subject recruitment did not apply. Thus, our institutional review boards waived informed consent (comparable to secondary use of electronic health record data), removing what can be a critical logistical barrier. There are limitations to what stakeholder engagement can achieve, however. We expected that engaging clinical practice stakeholders would enhance the selected PRO measure's clinical utility (it informs clinical decision making) and sustainability (use of the PRO measure continues after the study period ends). Ultimately, the clinical utility and sustainability of the MAS were predicated on the perceived performance characteristics of the MAS in practice. Key PRO performance characteristics include psychometric soundness, person-centeredness, meaningfulness, amenability to change, and feasibility of implementation.49 Unfortunately, the clinical practice stakeholders did not perceive the MAS as valid because an unexpectedly large majority of patients reportedly indicated perfect medication adherence. The barriers checklist seemed to have the greatest utility from a clinical practice perspective. The finding that asking patients about barriers served as an entrée to a conversation about medication regimens is consistent with a proposed framework for evaluating the effects of using PRO measures to support chronic illness care.50 The low perceived validity of the MAS may have been due to our slight modifications to the measure by Gehi et al,46 thus invalidating the previously established scale. Alternatively, it may be that a broad measure of general medication adherence (ie, not specific to a particular medication) is unclear or unacceptable to patients. Furthermore, while the measure by Gehi et al had been validated in a research context, it was not, to our knowledge, tested in a real-world clinical practice context. Finally, it may be that medication adherence is not a suitable PRO for patients with increased cardiovascular risk.
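One way to make the perceived ceiling effect concrete is a simple distributional check on the single adherence item. The following is a minimal sketch, assuming illustrative response categories and an arbitrary 80% threshold rather than the MAS coding or any validated cutoff.

```python
from collections import Counter

# Illustrative check for a ceiling effect on a single-item adherence measure.
# Response categories, counts, and the 80% threshold are assumptions for
# illustration only; they are not the MAS coding or observed data.
responses = (["all of the time"] * 92
             + ["most of the time"] * 5
             + ["some of the time"] * 3)

counts = Counter(responses)
top_category_share = counts["all of the time"] / len(responses)

if top_category_share > 0.80:
    print(f"Possible ceiling effect: {top_category_share:.0%} of respondents "
          "chose the highest adherence category.")
```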
In the time since we conducted this work, the American Heart Association released a statement indicating that appropriate PRO measures for assessing cardiovascular health may include patient-reported health status with respect to symptoms (burden of angina, dyspnea, depression), functional status, and health-related quality of life—topics that may resonate with patients as meaningful to their quality of life.51

### Limitations

Our experiences with this approach may not generalize beyond this group of partners or beyond this particular clinical context (ie, PROs for a cohort of patients with hypertension/hyperlipidemia); the timeline may have been much shorter and the level of interest at the practice level may have been greater had there been a well-established recommendation for a PRO for this cohort. While we considered researcher and clinical practice stakeholder perspectives in PRO selection and implementation, we did not engage patients beyond asking for their feedback on the readability of the survey during pilot testing. By engaging patients, we might have selected a more valid and reliable measure of adherence, or we might have selected a different PRO altogether. We did not formally evaluate data collection using this stakeholder engagement approach versus another approach (or no stakeholder engagement at all); thus we can draw no conclusions regarding the relative effectiveness of this approach. However, the clinical partners would not have considered undertaking this broad PRO implementation effort if the researchers had not approached it from a collaborative perspective. In that respect, we are confident that the adoption of PROs at the organizational level was greater because of this approach. Finally, this description of our collective experience as researchers and clinical practice representatives is not research or a formal evaluation of our process, and no qualitative analysis was conducted. Rather, this is a methods and reflection article, coauthored by both the researcher and clinical stakeholder representatives.

### Implications and Future Research

Since the advent of the EHR, there has been a nearly overwhelming increase in the amount of data that providers are expected to collect and document in structured fields and checkboxes in the EHR.52,53 When PRO measures are added to this data collection burden, the value gained by a PRO must be weighed against the effort required to collect and interpret the data. Additional PRO data collection will eventually become untenable for both patients and care providers. Potential solutions include targeting data collection to those patients for whom the data are most relevant (thus reducing the total burden to any individual patient); establishing systems that allow for a larger involvement of the care team in making clinically relevant information available to the clinical provider at the point of care; adopting the use of patient portals, kiosks, and smartphone applications to collect information before the visit; and weeding out data collection that does not prove to be useful. Therefore, further work is needed to identify methods and infrastructure that can be useful for rapid adoption of PROs for research purposes.

## Acknowledgments

The authors acknowledge the efforts of the SAFTINet Partner Engagement Community and the comparative effectiveness research team, and the work done by all SAFTINet partner organizations, including providers, staff, and patients.
The authors thank Elizabeth Staton, technical writer, for her assistance with editing this article. They also thank the subject matter experts they consulted in the process of selecting the PRO content area and measurement tool, including Drs. Michael Ho, Stacie Dougherty, Raymond Estacio, and Tom Maddox.

## Appendix 1

Appendix figures (available online): [graphic 2](http://www.jabfm.org/content/29/1/102/F2/graphic-2), [graphic 3](http://www.jabfm.org/content/29/1/102/F2/graphic-3), [graphic 4](http://www.jabfm.org/content/29/1/102/F2/graphic-4), [graphic 5](http://www.jabfm.org/content/29/1/102/F2/graphic-5).

## Notes

* This article was externally peer reviewed.
* *Funding:* This work was supported by the Agency for Healthcare Research and Quality grant 1R01HS019908 (Scalable Architecture for Federated Translational Inquiries Network; principal investigator, Lisa M. Schilling).
* *Conflict of interest:* none declared.
* Received for publication April 26, 2015.
* Revision received August 28, 2015.
* Accepted for publication September 9, 2015.

## References

1. Hickam D, Totten A, Berg A, Rader K, Goodman S, Newhouse R. The PCORI methodology report. Washington, DC: Patient-Centered Outcomes Research Institute; 2013.
2. Concannon TW, Meissner P, Grunbaum JA, et al. A new taxonomy for stakeholder engagement in patient-centered outcomes research. J Gen Intern Med 2012;27:985–91.
3. Calvert M, Blazeby J, Altman DG, Revicki DA, Moher D, Brundage MD. Reporting of patient-reported outcomes in randomized trials: the CONSORT PRO extension. JAMA 2013;309:814–22.
4. Reeve BB, Wyrwich KW, Wu AW, et al. ISOQOL recommends minimum standards for patient-reported outcome measures used in patient-centered outcomes and comparative effectiveness research. Qual Life Res 2013;22:1889–905.
5. Greenhalgh J. The applications of PROs in clinical practice: what are they, do they work, and why? Qual Life Res 2009;18:115–23.
6. Greenhalgh J, Meadows K. The effectiveness of the use of patient-based measures of health in routine practice in improving the process and outcomes of patient care: a literature review. J Eval Clin Pract 1999;5:401–16.
7. Valderas JM, Kotzeva A, Espallargues M, et al. The impact of measuring patient-reported outcomes in clinical practice: a systematic review of the literature. Qual Life Res 2008;17:179–93.
8. Concannon TW, Meissner P, Grunbaum JA, et al. A new taxonomy for stakeholder engagement in patient-centered outcomes research. J Gen Intern Med 2012;27:985–91.
9. Marshall S, Haywood K, Fitzpatrick R. Impact of patient-reported outcome measures on routine practice: a structured review. J Eval Clin Pract 2006;12:559–68.
10. Rose M, Bezjak A. Logistics of collecting patient-reported outcomes (PROs) in clinical practice: an overview and practical examples. Qual Life Res 2009;18:125–36.
11. Donaldson MS. Taking PROs and patient-centered care seriously: incremental and disruptive ideas for incorporating PROs in oncology practice. Qual Life Res 2008;17:1323–30.
12. Fung CH, Hays RD. Prospects and challenges in using patient-reported outcomes in clinical practice. Qual Life Res 2008;17:1297–302.
13. Lohr KN, Zebrack BJ. Using patient-reported outcomes in clinical practice: challenges and opportunities. Qual Life Res 2009;18:99–107.
14. Concannon T, Fuster M, Saunders T, et al. A systematic review of stakeholder engagement in comparative effectiveness and patient-centered outcomes research. J Gen Intern Med 2014;29:1692–701.
15. Deverka PA, Lavallee DC, Desai PJ, et al. Stakeholder participation in comparative effectiveness research: defining a framework for effective engagement. J Comp Eff Res 2012;1:181–94.
16. Cella D, Riley W, Stone A, et al. The Patient-Reported Outcomes Measurement Information System (PROMIS) developed and tested its first wave of adult self-reported health outcome item banks: 2005–8. J Clin Epidemiol 2010;63:1179–94.
17. U.S. Department of Health and Human Services FDA Center for Drug Evaluation and Research; U.S. Department of Health and Human Services FDA Center for Biologics Evaluation and Research; U.S. Department of Health and Human Services FDA Center for Devices and Radiological Health. Guidance for industry: patient-reported outcome measures: use in medical product development to support labeling claims: draft guidance. Health Qual Life Outcomes 2006;4:79.
18. Snyder CF, Jensen RE, Segal JB, Wu AW. Patient-reported outcomes (PROs): putting the patient perspective in patient-centered outcomes research. Med Care 2013;51(8 Suppl 3):S73–9.
19. Basch E. New frontiers in patient-reported outcomes: adverse event reporting, comparative effectiveness, and quality assessment. Annu Rev Med 2014;65:307–17.
20. Boyce MB, Browne JP. Does providing feedback on patient-reported outcomes to healthcare professionals result in better outcomes for patients? A systematic review. Qual Life Res 2013;22:2265–78.
21. Rothwell PM, McDowell Z, Wong CK, Dorman PJ. Doctors and patients don't agree: cross sectional study of patients' and doctors' perceptions and assessments of disability in multiple sclerosis. BMJ 1997;314:1580–3.
22. Espallargues M, Valderas JM, Alonso J. Provision of feedback on perceived health status to health care professionals: a systematic review of its impact. Med Care 2000;38:175–86.
23. Jagsi R, Chiang A, Polite BN, et al. Qualitative analysis of practicing oncologists' attitudes and experiences regarding collection of patient-reported outcomes. J Oncol Pract 2013;9:e290–7.
24. Varni JW, Burwinkle TM, Lane MM. Health-related quality of life measurement in pediatric clinical practice: an appraisal and precept for future research and application. Health Qual Life Outcomes 2005;3:34.
25. Gilbody SM, House AO, Sheldon T. Routine administration of Health Related Quality of Life (HRQoL) and needs assessment instruments to improve psychological outcome–a systematic review. Psychol Med 2002;32:1345–56.
26. Balasubramanian BA, Cohen DJ, Clark EC, et al. Practice-level approaches for behavioral counseling and patient health behaviors. Am J Prev Med 2008;35(5 Suppl):S407–13.
27. Pace WD, Staton EW. Electronic data collection options for practice-based research networks. Ann Fam Med 2005;3(Suppl 1):S21–9.
28. Fernald DH, Froshaug DB, Dickinson LM, et al. Common measures, better outcomes (COMBO): a field test of brief health behavior measures in primary care. Am J Prev Med 2008;35(5 Suppl):S414–22.
29. Peters M, Crocker H, Jenkinson C, Doll H, Fitzpatrick R. The routine collection of patient-reported outcome measures (PROMs) for long-term conditions in primary care: a cohort survey. BMJ Open 2014;4:e003968.
30. Weenink JW, Braspenning J, Wensing M. Patient reported outcome measures (PROMs) in primary care: an observational pilot study of seven generic instruments. BMC Fam Pract 2014;15:88.
31. Brundage M, Blazeby J, Revicki D, et al. Patient-reported outcomes in randomized clinical trials: development of ISOQOL reporting standards. Qual Life Res 2013;22:1161–75.
32. Bogart LM, Uyeda K. Community-based participatory research: partnering with communities for effective and sustainable behavioral health interventions. Health Psychol 2009;28:391–3.
33. Spertus J. Barriers to the use of patient-reported outcomes in clinical care. Circ Cardiovasc Qual Outcomes 2014;7:2–4.
34. Hartung DM, Guise JM, Fagnan LJ, Davis MM, Stange KC. Role of practice-based research networks in comparative effectiveness research. J Comp Eff Res 2012;1:45–55.
35. Mold JW, Peterson KA. Primary care practice-based research networks: working at the interface between research and quality improvement. Ann Fam Med 2005;3(Suppl 1):S12–20.
36. Maro JC, Platt R, Holmes JH, et al. Design of a national distributed health data network. Ann Intern Med 2009;151:341–4.
37. Schilling LM, Kwan BM, Drolshagen CT, et al. Scalable Architecture for Federated Translational Inquiries Network (SAFTINet) technology infrastructure for a distributed data network. EGEMS (Wash DC) 2013;1:1027.
38. Snyder CF, Aaronson NK, Choucair AK, et al. Implementing patient-reported outcomes assessment in clinical practice: a review of the options and considerations. Qual Life Res 2012;21:1305–14.
39. Aaronson N, Choucair A, Elliott T, et al. User's guide to implementing patient-reported outcomes assessment in clinical practice. November 11, 2011. Milwaukee, WI: International Society for Quality of Life Research. Available from: [http://www.isoqol.org/UserFiles/file/UsersGuide.pdf](http://www.isoqol.org/UserFiles/file/UsersGuide.pdf). Accessed November 12, 2014.
40. Baroletti S, Dell'Orfano H. Medication adherence in cardiovascular disease. Circulation 2010;121:1455–8.
41. Ho PM, Bryson CL, Rumsfeld JS. Medication adherence: its importance in cardiovascular outcomes. Circulation 2009;119:3028–35.
42. Moczygemba LR, Goode JV, Gatewood SB, et al. Integration of collaborative medication therapy management in a safety net patient-centered medical home. J Am Pharm Assoc (2003) 2011;51:167–72.
43. Epstein RM, Fiscella K, Lesser CS, Stange KC. Why the nation needs a policy push on patient-centered health care. Health Aff (Millwood) 2010;29:1489–95.
44. Aaronson N, Alonso J, Burnam A, et al. Assessing health status and quality-of-life instruments: attributes and review criteria. Qual Life Res 2002;11:193–205.
45. Garfield S, Clifford S, Eliasson L, Barber N, Willson A. Suitability of measures of self-reported medication adherence for routine clinical use: a systematic review. BMC Med Res Methodol 2011;11:149.
46. Gehi A, Ali S, Whooley M. Self-reported medication adherence and cardiovascular events in patients with stable coronary heart disease: the Heart and Soul Study. Circulation 2007;115:E584.
47. Gellad WF, Grenard JL, Marcum ZA. A systematic review of barriers to medication adherence in the elderly: looking beyond cost and regimen complexity. Am J Geriatr Pharmacother 2011;9:11–23.
48. Studer Q. Making process improvement ‘stick’. Healthc Financ Manage 2014;68:90–4, 96.
49. Patient reported outcomes (PROs) in performance measurement. Washington, DC: National Quality Forum; 2013. Available from: [https://www.qualityforum.org/Publications/2012/12/Patient-Reported_Outcomes_in_Performance_Measurement.aspx](https://www.qualityforum.org/Publications/2012/12/Patient-Reported_Outcomes_in_Performance_Measurement.aspx). Accessed November 24, 2015.
50. Santana MJ, Feeny D. Framework to assess the effects of using patient-reported outcome measures in chronic care management. Qual Life Res 2014;23:1505–13.
51. Rumsfeld JS, Alexander KP, Goff DC Jr, et al. Cardiovascular health: the importance of measuring patient-reported health status: a scientific statement from the American Heart Association. Circulation 2013;127:2233–49.
52. Tipirneni R, McMahon LF Jr. Measuring value in primary care: enhancing quality or checking the box? Health Serv Res 2014;49:1724–8.
53. Poissant L, Pereira J, Tamblyn R, Kawasumi Y. The impact of electronic health records on time efficiency of physicians and nurses: a systematic review. J Am Med Inform Assoc 2005;12:505–16.