Abstract
Purpose: To describe the differential completion rates and costs of sequential methods used for a survey of adolescents enrolled in a regional health care delivery organization.
Methods: Four thousand randomly selected enrollees were invited to complete a mailed health survey. Techniques used to boost response included (1) a follow-up mailing, (2) varying the appearance of the survey, (3) reminder calls, and (4) phone calls to obtain parent and child consent and to administer the survey. We evaluated the outcome and costs of these methods.
Results: Seven hundred eighty-three enrollees (20%) completed the first mailed survey and 521 completed the second, increasing the overall response rate to 33%. Completion was significantly higher among respondents who received only the plain survey than among those receiving only the color survey (P < .001). Reminder calls boosted response by 8 percentage points. Switching to administration of the survey by phone boosted response by another 20 percentage points, to 61%. The cost per completed survey was $29 for the first mailing, $26 after both mailings, $42 for mailings and reminder calls, and $48 after adding phone surveys.
Conclusion: The response to mailings and reminder calls was low and the cost was high, with decreasing yield at each step, although some low-cost techniques were helpful. Results suggest phone surveys may be most effective among similar samples of adolescents.
Response rates in population-based surveys have been declining for several decades, and achieving a response rate high enough for results to be considered valid is challenging given real-world budget constraints.1 Surveying adolescents is especially difficult because of the added logistical challenges and expense of obtaining informed parental consent before contacting the adolescent. However, overcoming the challenge of nonresponse is vital for accurately monitoring the health of this population because adolescence is a crucial time for health providers and parents to help teenagers establish good health behaviors to prevent morbidity and mortality in adulthood.2
Health care delivery organizations, such as health maintenance organizations and insurance providers, have a clear stake in learning more about how to teach adolescents to establish good health behaviors; success means healthier enrollees and potential cost savings from fewer enrollees developing chronic conditions that could have been prevented. To do this, health care organizations must be able to systematically collect data and disseminate findings that can be incorporated into the clinical practice of their providers. However, implementing effective survey methodology is difficult and costly. Often postal and electronic questionnaires are the only financially viable options for collecting data from large, geographically dispersed populations, but low response rates undermine the validity of the results.3 Telephone interviewing technology has advanced survey research, but increased use of both cell phones and caller identification in recent years has made reaching families more difficult, making telephone surveys increasingly expensive.4
To date, school-based surveys are the main method used by health researchers to measure the health of the US adolescent population.5–7 Data from national school-based surveys are routinely used by federal, state, and local agencies as well as nongovernmental organizations to support school health curricula, new legislation, and funding for new health promotion initiatives.2 School-based surveys offer particular advantages: classrooms offer a “captive” population of adolescents, and parents may not be required to give consent before the survey (ie, only passive consent or the ability to opt out may be required, depending on school policy).8 Although the importance of these school-based health behavior surveys cannot be overstated, they have important limitations. Most importantly, school-based surveys reach only youth who attend school, but youth who are not attending school or are frequently absent may be more likely to engage in high-risk behavior.9,10 Several large, national surveys systematically exclude schools with low response rates, which can lead to underestimates of certain high-risk behaviors.11
Health care delivery organizations are in a unique position to overcome some of the limitations of school-based survey methodology because they have access to adolescent enrollees who do not attend public schools and to contact information (such as cell phone numbers) that is not publicly available. Also, because parental consent is usually required by law for survey researchers operating outside schools, health care delivery organizations may be in a better position to obtain this consent: parents are more likely to consent to their child's participation in a survey conducted by an organization they trust than in one conducted by an entity unknown to them.12
Despite these potential advantages, survey methodology used by health care organizations to capture information about adolescent populations has rarely been described in the literature. There is an absence of published research findings from surveys conducted among large, community-based samples of adolescents outside the school setting. The purpose of this article is to fill a gap in the publicly available literature and to describe the differential completion rates and costs associated with sequential mail and telephone methods used to survey adolescents who were enrolled in a regional health care delivery organization in the Pacific Northwest.
Methods
Study Participants
Group Health is a nonprofit health care system that provides comprehensive health care to more than 600,000 residents of Washington state and Northern Idaho.13 Research study participants were recruited by staff at the Group Health Research Institute for the Adolescent Health (ASC) Study. The purpose of the ASC study was to examine the clinical and demographic predictors of depression persistence in adolescents, and the sample size was chosen to evaluate the performance of a 2-stage depression screening procedure.14 In stage 1, Group Health staff invited 4000 randomly selected, English-speaking enrollees aged 13 to 17 years who had seen a Group Health provider at least once during the last year to participate in a brief survey between September 2007 and June 2008. In stage 2, a subset of the adolescents who had participated in stage 1 were invited to participate in a follow-up phone interview study, during which more in-depth information about depressive symptoms, functional impairment, and health behaviors was obtained. The focus of this analysis is on stage 1 methods (described below).
The parents/guardians of all selected enrollees were mailed 2 copies of a consent form, a survey for their child, and an invitation letter appealing to parents to participate and to ask their child to participate in a survey about “teen health” to help researchers understand “the needs of our teen patients.” If parents agreed to their child's participation, they were instructed to sign one copy of the consent form and then give that consent form and the study survey to their child to complete in a private place. A $2 incentive for the child was included with the first mailed survey, along with a postage-paid envelope for returning the completed survey to Group Health with a copy of the consent form signed by the parent/guardian. Completion of the survey was taken as a form of assent by the child, and a phone number for questions was included on all study materials. Additional attempts were made by mail and phone to obtain consent from a parent/guardian for surveys that were returned without consent forms. All study procedures were approved by the Group Health institutional review board. Participants and their parents/guardians were required to provide either written or verbal consent for participation.
Survey Instrument
The ASC survey consisted of 10 items about age, sex, weight, height, sedentary behaviors, functional impairment, and depressive symptoms. Activity-related items included questions about the hours and minutes spent using a computer, watching television, and “exercising or participating in an activity that makes you sweat and breathe hard.” Functional impairment was assessed using 3 items from the Columbia Impairment Scale,15 a psychosocial functional impairment scale widely used in youth anxiety and depression studies. Depressive symptoms were assessed using the 2-item Patient Health Questionnaire (PHQ-2) Depression Scale,16 which at the time of this study had not yet been validated for screening adolescent populations for depression. A small pilot study (n = 100) was conducted a few months before the launch of this study to inform survey design (ie, layout and formatting decisions).
Procedures Used to Boost Response Rate
Additional attempts were made by mail and phone to boost the initial response rate to the mailed survey. First, nonresponders received a second survey approximately 2 weeks after the first. Next, a portion of nonresponders received a reminder call to complete the survey (defined as actual phone contact with a parent or guardian), and an additional mailed survey was sent on request.
Two versions of the survey were used in these mailings: a color tri-fold and a plain black and white single sheet. The first group of participants received the color tri-fold version, but response was lower than expected based on results from the pilot study, so methods were changed to use the version from the pilot study, the plain black and white single sheet.
Finally, study methods were changed to obtain consent and survey responses by phone from the remaining youth who had not yet received a reminder call and from a portion of youth whose parents had stated an intent to participate during a reminder call but whose surveys were never returned. Interviewers were required to obtain verbal consent from the parents/guardians and assent from the youth before administering the phone survey to the youth. Telephone interviewers used computer-assisted telephone interviewing software and were required to record the outcome of all contact attempts.
Analysis of Differential Survey Completion and Cost
For enrollees who refused the survey or did not respond, the available demographic variables were limited to age and sex. Multivariable logistic regression was used to examine the association between age, sex, timing of the survey invitation, and survey completion (the independent variables were identified a priori). Age was analyzed as both a categorical and a continuous variable (data not shown). Adjusted analyses of mail survey completion included the independent variables of interest as well as the type of survey sent and whether or not a reminder call was attempted. Both adjusted and unadjusted analyses are presented.
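For readers who wish to see the structure of the adjusted analysis, the sketch below fits a logistic regression of survey completion on the a priori predictors. It is a minimal reconstruction in Python using statsmodels, not the study's actual code: the data frame, column names, and synthetic data are all hypothetical placeholders.

```python
# A minimal sketch of the adjusted logistic regression described above.
# All column names and the synthetic data are hypothetical placeholders,
# not the ASC study's actual variables or data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical stand-in for the analytic dataset: one row per invited enrollee.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age": rng.integers(13, 18, n),                         # age in years (13-17)
    "sex": rng.choice(["F", "M"], n),
    "invite_month": rng.choice(["Sep", "Oct", "Nov"], n),   # timing of invitation
    "survey_type": rng.choice(["color_trifold", "plain_bw"], n),
    "reminder_call": rng.integers(0, 2, n),                 # 1 if a call was attempted
})
df["completed"] = rng.binomial(1, 0.25, n)                  # placeholder outcome

# Adjusted model: completion as a function of the a priori predictors plus
# survey type and whether a reminder call was attempted.
model = smf.logit(
    "completed ~ age + C(sex) + C(invite_month) + C(survey_type) + reminder_call",
    data=df,
).fit(disp=False)

# Report adjusted odds ratios with 95% confidence intervals.
ors = pd.DataFrame({
    "OR": np.exp(model.params),
    "CI_low": np.exp(model.conf_int()[0]),
    "CI_high": np.exp(model.conf_int()[1]),
})
print(ors.round(2))
```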
Because all of the participants were initially invited to participate by mail, the cost of phone surveying versus mail surveying could not be directly compared. Cost figures included all materials used in the mailings (ie, letterhead, envelopes, reply envelopes, questionnaires); postage (outgoing and incoming); the $2 incentive in the first mailing; and personnel time (including time to process mailings, time for data entry from surveys returned by mail, time for programming the computer-assisted telephone interviewing instrument, and hours spent by interviewers on reminder calls and phone survey administration). Cost estimates did not include the space leased by the survey personnel or any administrative costs associated with conducting a research study, such as institutional review board review. The cost of survey administration is presented as the cost per completed survey after each incremental method (second mailing, reminder calls, and survey calls) was added.
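As a worked illustration, this incremental calculation can be reproduced with a few lines of arithmetic. In the sketch below, the completion counts for the mailings and reminder calls come from the Results; the phone-stage count and all dollar figures are back-calculated from the published response rates and cost-per-response figures, so they are approximations for illustration, not the study's actual expenditures.

```python
# A worked sketch of the cumulative cost-per-completed-survey calculation.
# Completion counts for the first three steps come from the Results; the
# phone-survey count and all dollar figures are back-calculated from the
# published rates ($29, $26, $42, and $48 per response), so they are
# illustrative approximations only.
steps = [
    # (method, completed surveys added, approximate incremental cost in $)
    ("first mailing", 783, 22_700),
    ("second mailing", 521, 11_200),
    ("reminder calls", 288, 33_000),
    ("phone surveys", 848, 50_300),  # 848 = 61% of 4000 minus prior completes
]

invited = 4000
total_cost = 0
total_completed = 0
for method, completed, cost in steps:
    total_cost += cost
    total_completed += completed
    print(f"after {method}: {total_completed / invited:.0%} response, "
          f"${total_cost / total_completed:.0f} per completed survey")
```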
Results
Survey Response
Of the 4000 Group Health teens who had been mailed the first survey, 3114 (78%) did not reply, 783 (20%) completed the survey, 75 (2%) actively refused study participation (by mail or phone), and 28 (<1%) were ineligible (they were no longer Group Health enrollees, were cognitively or physically unable to participate, or the survey was undeliverable) (Figure 1). Of the 3114 nonresponders to whom the second survey was mailed, 2451 (79%) did not reply, 521 (17%) completed the survey, 131 (4%) refused to participate, and 11 (<1%) were ineligible. Less than 1% of completed surveys were returned without the consent form, and additional mail and phone attempts to obtain consent were successful in all but 4 cases (when a parent or guardian could not be reached; see Figure 1 legend).
Survey formatting was varied among participants: 28% received a color tri-fold for both mailings, 35% received the tri-fold in the first mailing and the plain black and white single sheet as the second mailing, and 38% received the plain black and white sheet for both mailings. The completion rate was significantly higher for respondents who received only the plain black and white survey compared with those who received the color tri-fold for both mailings (P < .001; see Table 1).
Of the 1600 participants who received a telephone call reminding them to return the written survey (65% of the 2451 remaining nonresponders), 1094 (68%) agreed to return the completed survey, but only 288 (18%) actually did. Of the 1341 nonresponders who were asked to complete the survey by phone after the survey methods were changed, 699 (52%) agreed and 509 (38%) refused.
Correlates of Mail Survey Versus Phone Survey Completion
After adjusting for confounding, there were no significant differences in age, timing of initial contact (September, October, or November), or sex between youth who completed the mail survey and those who either refused or did not respond (see Table 2). In contrast, youth who completed the phone survey were significantly older (P = .048) and more likely to complete the survey when contacted in April versus January (P = .009) than those who either refused or did not respond (see Table 3).
Among the children whose parents/guardians received a reminder call, there were no differences in age or sex between those who returned a completed survey and those who said they would but did not (data not shown).
Cost Per Completed Survey
The initial mailing yielded a 20% response rate at a cost of $29 per response (Figure 2); adding the follow-up mailing to nonresponders decreased the overall cost to $26 per response. Adding the reminder calls increased the response rate by 8 percentage points but raised the cost to $42 per response. Changing the recruitment methods to complete the survey by phone increased the cost to $48 per response and boosted the overall response rate to 61%.
Discussion
Multiple survey methods were used with varying success to achieve a 61% response rate from adolescents who were enrolled in a regional health care delivery organization. The initial mailing resulted in only a 20% response rate at the price of $29 per response. Subsequent mailings and reminder calls resulted in a modest increase in the response rate, but the most success came after the survey methods were substantially changed to conduct the consent process and survey with remaining nonresponders by phone.
The results of this study suggest that some low-cost techniques may be useful for boosting survey response rates. Although our follow-up mailing produced only a modest increase in response (raising the total response rate to 33%), the increase was enough to lower the overall cost of survey administration to $26 per response. Varying the appearance of the survey was another relatively successful low-cost technique; the plain black and white single sheet of paper was more successful than the color tri-fold version that was initially mailed to the adolescents. To date, few experimental studies of the effect of colored questionnaires have been published, and a recent meta-analysis showed that the results of these studies have been mixed.17 In this study, color and survey format were altered at the same time, so their effects cannot be examined separately, but the plain formatting was significantly more successful. The reason for this is unknown, but it is possible that the plain formatting made the survey more apparent, whereas the tri-fold may have been mistaken for a brochure and discarded. The same cover letter was sent with both versions of the survey and included a broad appeal for parents to ask their children to participate in a study to help understand “the health needs of our teen patients”; it is possible that a different appeal may have increased response. However, the appeal was designed to be consistent with a previous finding that letters clearly indicating that the purpose of a survey is to serve the interests of a valued group increase survey participation.18
The reminder calls conducted among a subset of the nonresponders were particularly unsuccessful in this study, although they are a good illustration that what people say they will do often differs from what they actually do. Although 68% of enrollees who received a reminder call agreed to return the written survey, only 18% actually did so; this resulted in a small overall increase in the response rate (to 41%) but at a higher price per response ($42).
Conducting the phone survey with a subset of nonresponders resulted in the highest response rate (52%). The cost per response of the phone surveys was also the highest, at $48 per response, but this figure includes all of the previous contact attempts by phone, so the price would likely have been much lower without the initial mailings and reminder call attempts. The phone surveys were conducted among the most difficult-to-reach population in this study (those who had not responded to any of the previous contact attempts), so the response rate would probably have been even higher among a randomly selected population of adolescents. Most importantly, the phone surveys boosted the overall response by 20 percentage points, resulting in a final response rate of 61%, which substantially improved the validity of the overall study results.
This study was not designed to directly compare the outcome of different survey administration techniques, and the sequential nature of the methods used prevents direct cost comparisons. Rather, the purpose of this article was to describe a real-world example of a study using multiple sequential methods to increase response to a threshold high enough for survey results to be valid.
These analyses do not directly address the effects of mixing survey modes (ie, combining data from the self-administered written surveys and the interviewer-administered telephone surveys), although variables possibly associated with differential survey completion were analyzed. In this study there were no differences in the age group, sex, or contact month of those who completed the survey by mail; however, those who completed a phone survey were slightly older and more likely to complete it if they were contacted in April versus January. There is a lack of empirical studies among adolescents about the effects of mixing survey modes on self-reported health, but recent studies show that both mode19 and survey setting (home vs school)20 can affect the results of health surveys conducted among adolescent populations. Experienced survey researchers also understand that mode mixing may often be unavoidable in today's data collection environment and that potential biases introduced by mixing survey modes must be overcome through better understanding of survey design.21
Conclusion
This study adds to the existing literature about survey methodology and can help future health researchers use cost-effective solutions to improve survey response. High response rates are one of several goals for health researchers and must be balanced with the goals of maximizing data quality and generalizability. Previous studies have shown that sending advance mailings before conducting phone surveys18,22 and providing token cash incentives to adolescents23 are cost-effective ways to increase participation in telephone surveys. Additional research designed to directly compare the costs of mailed versus phone survey methodology among adolescent populations would be a helpful addition to the existing literature; however, in our experience, mailed surveys alone will not generate a response high enough to reach the minimum threshold (generally 60%) at which results are typically accepted for publication, which limits researchers' ability to disseminate their findings. The results from this study suggest that phone surveys are more effective than mailed surveys for helping health care delivery organizations achieve a response rate high enough to obtain valid results from a community sample of adolescents.
Notes
This article was externally peer reviewed.
Funding: Supported by the following grants: NIH/NIMH grant no. 1 K23MH069814, University of Washington Royalty Research Fund no. 3862, the Children's Hospital and Regional Medical Center Steering Committee Award no. 24536-UW01, and the Group Health Community Foundation.
Conflict of interest: none declared.
- Received for publication January 30, 2010.
- Revision received April 30, 2010.
- Accepted for publication May 6, 2010.