Journal of Clinical Epidemiology 68 (2015) 693–697

Unconditional and conditional incentives differentially improved general practitioners' participation in an online survey: randomized controlled trial

Jane M. Young a,*, Anna O'Halloran a, Claire McAulay a, Marie Pirotta b, Kirsty Forsdike b, Ingrid Stacey c, David Currow c

a Cancer Epidemiology and Services Research (CESR), Sydney School of Public Health, University of Sydney, Level 6 North, Lifehouse (C39Z), NSW 2006, Sydney, Australia
b Department of General Practice, University of Melbourne, 200 Berkeley St, Carlton, VIC 3053, Australia
c Cancer Institute NSW, Australian Technology Park, Level 9, 8 Central Avenue, Everleigh, NSW 2015, Australia

Accepted 12 September 2014; Published online 19 October 2014

Abstract

Objectives: To compare the impact of unconditional and conditional financial incentives on response rates among Australian general practitioners invited by mail to participate in an online survey about cancer care and to investigate possible differential response bias between incentive groups.

Study Design and Setting: Australian general practitioners were randomly allocated to unconditional incentive (book voucher mailed with letter of invitation), conditional incentive (book voucher mailed on completion of the online survey), or control (no incentive). Nonresponders were asked to complete a small subset of questions from the online survey.

Results: Among 3,334 eligible general practitioners, significantly higher response rates were achieved in the unconditional group (167 of 1,101, 15%) compared with the conditional group (118 of 1,111, 11%) (P = 0.0014), and both were significantly higher than the control group (74 of 1,122, 7%; both P < 0.001). Although more positive opinions about cancer care were expressed by online responders compared with nonresponders, there was no evidence that the magnitude of difference varied by incentive group. The incremental cost for each additional 1% increase above the control group response rate was substantially higher for the unconditional incentive group compared with the conditional incentive group.

Conclusion: Both unconditional and conditional financial incentives significantly increased response with no evidence of differential response bias. Although unconditional incentives had the largest effect, the conditional approach was more cost-effective.

© 2015 Elsevier Inc. All rights reserved.

Keywords: Randomized controlled trial; Surveys; Health professionals; General practice; Response rates; Participation incentives

Funding: This study was funded by the Cancer Institute NSW and the Cancer Council Victoria. J.M.Y., A.O'H., and C.M. were supported through a Cancer Institute NSW Academic Leader in Cancer Epidemiology grant awarded to J.M.Y.
Conflicts of interest: D.C. is the Chief Executive of one of the funding institutions, but there are no conflicts of interest. The other authors have no conflicts to disclose.
* Corresponding author. Tel.: +61 2 8627 1559; fax: +61 2 9515 3222. E-mail address: [email protected] (J.M. Young).
http://dx.doi.org/10.1016/j.jclinepi.2014.09.013
0895-4356/© 2015 Elsevier Inc. All rights reserved.

What is new?

Key findings
- Both unconditional and conditional financial incentives significantly increased response to an online survey compared with a no-incentive control group.
- Although more positive opinions about cancer care were expressed by online responders compared with nonresponders, there was no evidence that the magnitude of difference varied by incentive group.
- Although the unconditional incentive yielded a statistically significantly higher response rate than the conditional incentive, the conditional incentive was the most cost-effective option.

What this adds to what was known?
- Unconditional and conditional incentives can increase health professionals' participation in online surveys.
- These results are similar to findings of improved response rates with unconditional and conditional financial incentives in traditional mailed surveys.

What is the implication and what should change now?
- Although survey response rates improved with the use of financial incentives, the survey response was very low in all groups.
- Further research is needed to identify more effective response-aiding strategies, particularly for online surveys.

1. Introduction

Surveys provide the only feasible means to gather information about subjective issues such as participants' knowledge, attitudes, beliefs, preferences, or experiences from large numbers of people. Achieving a high response rate among those invited to participate in a survey reduces the potential for nonresponse bias. Unfortunately, health professionals' participation in surveys has declined in recent years [1], with response rates of less than 30% now common in surveys of general practitioners (GPs) [2,3]. Online surveys have a number of advantages over traditional postal methods, including reduced administration, printing, and postage costs, and the ability to use programming to optimize a respondent's pathway through the questionnaire [4]. However, recruitment of a representative sample can be even more problematic than for a postal survey, as a complete list of e-mail addresses for all potential invitees is often not available, limiting sample selection [4]. Furthermore, response rates in online surveys may be lower than those obtained when the same questionnaire is administered in postal format [2].

Although effective strategies to improve response rates to postal surveys are well documented [5–10], the applicability of these findings to online surveys remains unclear, as there has been relatively little research in this context [5]. Financial incentives can almost double postal survey response rates in some settings, with unconditional incentives (those provided regardless of questionnaire completion) consistently outperforming incentives provided conditionally on survey response [5]. However, a further consideration is whether the use of incentives could compound the problem of nonresponse bias. Survey respondents who receive no incentive participate for altruistic reasons or because they have a particular interest in the topic of the questionnaire. It is possible that an unconditional incentive prompts participation by provoking a sense of obligation or guilt for nonparticipation, whereas a conditional incentive could engender a sense of reward for the time and effort expended [11]. Thus, unconditional and conditional incentives could have a differential effect on subgroups within the sample and could reduce the representativeness of study participants.

This randomized trial was undertaken to compare the effectiveness of unconditional and conditional financial incentives, relative to a no-incentive control group, in improving response rates among GPs invited to participate in an online survey. The hypothesis was that response rates in the unconditional incentive group would be higher than those in the conditional or control groups. A second aim was to investigate nonresponse bias in the samples achieved in each incentive group.

2. Methods

This randomized trial was embedded in the Australian component of the International Cancer Benchmarking Partnership Module 3 survey of GPs [12], the results of which will be reported separately. The study was approved by the Ethics Review Committee of Sydney Local Health District (RPAH zone) and the University of Melbourne.

2.1. Sample selection and randomization

GPs in New South Wales (NSW) and Victoria were randomly selected from a commercial database [13], stratified by state and location (metropolitan or nonmetropolitan), and randomly allocated to one of three incentive groups using a computer-generated random number list (see the sketch at the end of this section):

1. Unconditional incentive group: a book voucher (face value of A$75 in NSW and A$50 in Victoria) was included with the letter of invitation.
2. Conditional incentive group: GPs in this group were advised that they would receive the book voucher on completion of the questionnaire.
3. No-incentive control group.

2.2. Survey administration

Apart from the incentive, survey administration was identical for all three groups. GPs were mailed an advance letter about the study approximately 1 week before the main letter of invitation. Up to three mailed reminder letters were sent to nonresponders at biweekly intervals. The final reminder letter asked GPs to complete the online survey. If they were unable to do so, GPs were asked to fax back a one-page form comprising a small subset of demographic items and to indicate their level of agreement with the following attitudinal statements: "More timely diagnosis of cancer is important to ensure better outcomes" for each of six common cancers and "I like to wait until I am sure of a diagnosis of cancer before making a referral to a specialist," using five-point response scales ("strongly disagree" to "strongly agree"). GPs who returned the faxed form were considered the closest available representation of nonresponders to the online survey.

2.3. Statistical analysis

Differences in the final survey response rates were compared between groups using chi-square tests on an intention-to-treat basis. Subgroup analyses for GP demographic characteristics were undertaken using logistic regression models that included the incentive group, the subgroup of interest, and an incentive group-by-subgroup interaction term; a statistically significant interaction term was taken as evidence that the effect of the incentives differed by that subgroup. Distributions of responses to the attitudinal questions were compared between respondents in each incentive group and between respondents and nonresponders within groups. The incremental cost for each 1% increase in the response rate compared with the control group was calculated by state for the unconditional and conditional approaches.

2.4. Sample size

A sample size of 800 GPs per incentive group (2,400 total and 1,200 per state) was required to detect a difference of 7% in survey response rates, with 80% power and 5% alpha.
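The paper does not describe the allocation in code, so the following is only a minimal sketch of how the stratified allocation in Section 2.1 could be generated; the sampling-frame layout, the column names (state, location), and the use of pandas/NumPy are illustrative assumptions rather than details from the study.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(20140912)  # computer-generated random numbers (seed is arbitrary)

def allocate(frame: pd.DataFrame) -> pd.DataFrame:
    """Shuffle GPs within each state x location stratum, then deal them out to the
    three incentive arms in near-equal numbers (a simple stratified allocation)."""
    arms = ["unconditional", "conditional", "control"]
    allocated = []
    for _, stratum in frame.groupby(["state", "location"]):
        shuffled = stratum.sample(frac=1, random_state=rng)        # random order within stratum
        shuffled = shuffled.assign(arm=np.resize(arms, len(shuffled)))  # cycle through the arms
        allocated.append(shuffled)
    return pd.concat(allocated)
```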

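The subgroup analysis in Section 2.3 can be expressed as a likelihood-ratio test of the incentive group-by-subgroup interaction in a logistic regression for survey response. The sketch below is an assumed implementation, not the authors' code; the column names (responded, group) and the choice of statsmodels/SciPy are illustrative.

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

def interaction_pvalue(df: pd.DataFrame, subgroup: str) -> float:
    """P-value for an incentive group x subgroup interaction in a logistic
    regression where responded is 1 for online survey completion, else 0."""
    full = smf.logit(f"responded ~ C(group) * C({subgroup})", data=df).fit(disp=0)
    reduced = smf.logit(f"responded ~ C(group) + C({subgroup})", data=df).fit(disp=0)
    lr_stat = 2 * (full.llf - reduced.llf)          # likelihood-ratio statistic
    extra_df = full.df_model - reduced.df_model     # number of interaction parameters
    return stats.chi2.sf(lr_stat, extra_df)

def overall_pvalue(df: pd.DataFrame) -> float:
    """Chi-square test comparing response rates across the three incentive arms."""
    table = pd.crosstab(df["group"], df["responded"])
    return stats.chi2_contingency(table)[1]
```

A call such as interaction_pvalue(df, "sex") would reproduce the type of interaction test reported for age, gender, and location in the Results.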
3. Results

Of the 3,650 listed individuals, 283 (7.7%) were found to be ineligible for the study. There were no major differences between GPs allocated to each incentive group (Table 1). Response rates by incentive group are presented in Table 2. Compared with controls, the response rates were significantly higher for both the unconditional group [difference of 6%; 95% confidence interval (CI): 3%, 8%; P < 0.0001] and the conditional group (2%; 95% CI: 0.3%, 4%; P = 0.0007). Furthermore, the improvement in response rates between the unconditional and conditional incentive groups was also statistically significant (3% difference; 95% CI: 1%, 6%; P = 0.0014). There were no statistically significant differences in these effects by GP age, gender, or location (all P-values for interaction terms >0.05).

In NSW (A$75 incentive), the incremental cost for every 1% increase in the response rate compared with the control group was A$2,481 for the unconditional incentive and A$735 for the conditional incentive. In Victoria (A$50 incentive), the incremental costs were A$5,264 and A$862, respectively.

Nonresponders who completed the one-page form demonstrated more negative attitudes toward cancer care than responders across all three incentive groups (Table 3).


Table 1. Characteristics of general practitioners randomized to each incentive group

                      Unconditional       Conditional         Control
                      n        %          n        %          n        %
State
  NSW                 364      32         378      34         378      34
  Victoria            737      33         733      33         744      34
Age group (yr)
  <45                 338      31         310      28         311      28
  45–59               501      46         509      46         529      47
  60+                 229      21         268      24         256      23
Sex
  Female              401      36         430      39         441      39
  Male                693      63         674      61         676      60
Location
  Rural               548      50         545      49         558      50
  Urban               553      50         566      51         564      50
Total                 1,101               1,111               1,122

Abbreviation: NSW, New South Wales.
Where data are missing for age and sex, categories do not sum to 100%.

However, the magnitude of difference between responders and nonresponders was similar in all three study groups (Table 3).
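The incremental cost figures reported above are consistent with a simple calculation in which the cost of an arm is the total face value of the vouchers actually distributed (every invitee in the unconditional arm, only responders in the conditional arm) divided by the percentage-point gain in response rate over the control arm. The paper does not spell out this formula, so the sketch below is a hedged reconstruction using the numbers in Tables 1 and 2.

```python
def cost_per_extra_point(vouchers_paid: int, voucher_value: float,
                         arm_rate_pct: float, control_rate_pct: float) -> float:
    """Incentive cost per 1-percentage-point gain in response rate over control."""
    return vouchers_paid * voucher_value / (arm_rate_pct - control_rate_pct)

# NSW (A$75 voucher): all 364 GPs in the unconditional arm were mailed a voucher
# (19% vs 8% response); only the 49 responders in the conditional arm were paid
# (13% vs 8% response).
print(cost_per_extra_point(364, 75, 19, 8))  # ~2,482, close to the reported A$2,481
print(cost_per_extra_point(49, 75, 13, 8))   # 735.0, as reported
```

Applying the same function to the Victorian figures (737 GPs at A$50 over a 7-point gain, and 69 paid responders at A$50 over a 4-point gain) reproduces the reported A$5,264 and A$862.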

Table 2. Survey response rates by incentive group, state, and subgroup

                      Unconditional            Conditional              Control
                      n      N       %         n      N       %         n      N       %
Overall               167    1,101   15        118    1,111   11        74     1,122   7
State
  NSW                 70     364     19        49     378     13        29     378     8
  Victoria            97     737     13        69     733     10        45     744     6
Age group (yr)
  <45                 52     338     15        46     310     15        22     311     7
  45–59               86     501     17        48     509     9         40     529     8
  60+                 26     229     11        19     268     7         12     256     5
Sex
  Female              72     401     18        64     430     15        33     441     7
  Male                95     693     14        54     674     8         41     676     6
Location
  Rural               83     548     15        59     545     11        45     558     8
  Urban               84     553     15        59     566     10        29     564     5

Abbreviation: NSW, New South Wales.
n = number of responders; N = number of GPs invited.


Table 3. Characteristics and survey responses of online survey responders, fax-back nonresponders, and true nonresponders, by incentive group

                                      Unconditional                     Conditional                       No incentive
                                      Responders  Fax back  Nonresp.    Responders  Fax back  Nonresp.    Responders  Fax back  Nonresp.
Completed surveys                     168         73        -           118         64        -           74          77        -
Total eligible                        1,101       933       860         1,111       993       929         1,122       1,048     971
Response rate (%)                     15          8*        -           11          6*        -           7           7         -
Characteristics
  Median age (yr)                     50          55        51          47.5        55.5      52          51          56.5      51
  % Male                              57          64        64          46          63        63          55          61        61
  % Rural                             49          60        49          50          53        49          61          36        50
Mean agreement scores (1 = strongly disagree to 5 = strongly agree)
More timely diagnosis is important to ensure better outcomes for
  Breast cancer                       4.8         4.6*      N/A         4.7         4.4       N/A         4.8         4.6       N/A
  Prostate cancer                     3.8         4.0*      N/A         3.7         3.9       N/A         3.7         3.8       N/A
  Ovarian cancer                      4.7         4.5       N/A         4.7         4.4       N/A         4.6         4.6       N/A
  Colorectal cancer                   4.9         4.7       N/A         4.8         4.5       N/A         4.9         4.7       N/A
  Lung cancer                         4.6         4.4       N/A         4.4         4.4       N/A         4.3         4.5       N/A
  Melanoma                            4.9         4.7*      N/A         4.9         4.4*      N/A         4.8         4.6       N/A
I like to wait until I am sure of a
  diagnosis of cancer before making
  a referral to a specialist          2.7         2.2*      N/A         2.5         2.1*      N/A         2.4         2.3       N/A

Abbreviation: N/A, not available.
* P < 0.05.

4. Discussion

Effective response-aiding strategies may increase survey response rates but must do so without introducing response bias and at a cost that is feasible within research budgets. This study found that financial incentives, whether provided unconditionally or conditionally, statistically significantly increased participation of GPs compared with controls who received no incentive. Furthermore, there was no evidence of differential response bias between the three groups. Although the unconditional approach achieved the highest response rate and the improvement over the conditional group was statistically significant, the small improvement in absolute terms may not justify the substantial increase in costs. In terms of the cost to achieve each additional percentage point increase in the response rate compared with the control group, provision of a conditional incentive was more cost-effective than the unconditional approach.

These findings were achieved in the context of a survey that required a mailed letter of invitation because e-mail addresses were not available for GPs. Incentivization strategies could perform differently in surveys that are conducted entirely electronically. Furthermore, overall levels of participation were extremely low in this survey. It is possible that if the relative difference in response rates were replicated in other surveys with higher response rates overall, the differential improvement of the unconditional approach might be of practical importance.

That the unconditional incentive achieved the highest response rate is consistent with postal surveys of health professionals and other population groups [5,14,15]. However, no previous studies, to our knowledge, have compared the two approaches for recruitment of health professionals to online surveys. In contrast to our findings, one trial of incentives for an online survey in a nonmedical setting reported no improvement in the response rate among those receiving an unconditional financial incentive [16], highlighting that optimal response-aiding strategies may vary in different contexts.

There are several possible reasons for the low response rate overall, including perceived lack of relevance of the survey, the length of the questionnaire, and practical difficulties in accessing the materials online [5,17,18]. Previous research has demonstrated significantly lower levels of participation among health practitioners randomly allocated to online administration of a questionnaire compared with those who were given a pen-and-paper option [3,19]. In light of the generally low response rates achieved, some have proposed that online surveys should be avoided until effective methods to improve participation have been identified [20,21].

Other studies have found that the size of a monetary incentive influences its effectiveness [14]. The slightly higher response rate achieved in NSW compared with Victoria may have been due to the larger incentive used in that state, although other local influences cannot be ruled out because GPs were not randomized to receive vouchers of different face value. Despite their fairly substantial monetary value, neither of the strategies tested in this study improved the survey response rate to a level that would give confidence that a representative sample had been achieved.

In the present study, a GP's response to a question determined the next question they were asked in the case scenarios [12]. This required online rather than paper administration, as the latter would have required a lengthy document containing a large and confusing number of skips for GPs to work through. Given the finding in a previous study that providing access to both online and paper-based questionnaires simultaneously can increase GPs' participation [3,14], future studies involving less complex survey questions could further test this approach.

The finding that the online survey respondents consistently held more positive opinions about cancer care than the "nonresponders" who sent back only the one-page faxed form provides empirical evidence of response bias. However, there was no evidence that this differed between the incentive groups. Lack of information about true nonresponders is a limitation of this study.

In conclusion, unconditional and conditional financial incentives increased GPs' participation in an online survey, with no evidence of differential response bias. Given the low overall response rate, the small absolute improvement with the unconditional approach may not justify the costs.

Acknowledgments

The authors thank the GPs who participated in the study, Sigmer, UK, for enabling this randomized controlled trial to be tracked in the online survey software, and the International Cancer Benchmarking Partnership for establishing the online survey.

References

[1] Cook JV, Dickinson HO, Eccles MP. Response rates in postal surveys of healthcare professionals between 1996 and 2005: an observational study. BMC Health Serv Res 2009;9:160.
[2] Pirotta M, Kotsirilos V, Brown J, Adams J, Morgan T, Williamson M, et al. Complementary medicine in general practice: a national survey of GP attitudes and knowledge. Aust Fam Physician 2010;39:946–50.
[3] Scott A, Jeon SH, Joyce CM, Humphreys JS, Kalb G, Witt J. A randomised trial and economic evaluation of the effect of response mode on response rate, response bias, and item non-response in a survey of doctors. BMC Med Res Methodol 2011;11:126.
[4] Braithwaite D, Emery J, de Lusignan S, Sutton S. Using the Internet to conduct surveys of health professionals: a valid alternative? Fam Pract 2003;20:545–51.
[5] Edwards PJ, Roberts I, Clarke MJ, Diguiseppi C, Wentz R, Kwan I, et al. Methods to increase response to postal and electronic questionnaires. Cochrane Database Syst Rev 2009;MR000008.
[6] Asch DA, Jedrziewski MK, Christakis NA. Response rates to mail surveys published in medical journals. J Clin Epidemiol 1997;50:1129–36.
[7] Jepson C, Asch DA, Hershey JC, Ubel PA. In a mailed physician questionnaire length had a threshold effect on response rate. J Clin Epidemiol 2005;58:103–5.
[8] Halpern SD, Asch DA. Improving response rates to mailed surveys: what do we learn from randomized controlled trials? Int J Epidemiol 2003;32:637–8.
[9] Rashidian A, van der Meulen J, Russell I. Differences in the contents of two randomised surveys of GPs' prescribing intentions affected response rates. J Clin Epidemiol 2008;61:718–21.
[10] Harris IA, Kao OK, Young JM, Solomon MJ, Rae H. Lottery incentives did not improve response rate to a mailed survey: a randomized controlled trial. J Clin Epidemiol 2008;61:609–10.
[11] Kanaan RA, Wessely SC, Armstrong D. Differential effects of pre and post-payment on neurologists' response rates to a postal survey. BMC Neurol 2010;10:100.
[12] Rose PW, Hamilton W, Aldersey K, Barisic A, Dawes M, Foot C, et al. Development of a survey instrument to investigate the primary care factors related to differences in cancer diagnosis between international jurisdictions. BMC Fam Pract 2014;15:122.
[13] Australian Medical Publishing Company (AMPCo). Available at www.ampcodirect.com.au. Accessed October 13, 2014.
[14] Pit SW, Vo T, Pyakurel S. The effectiveness of recruitment strategies on general practitioners' survey response rates - a systematic review. BMC Med Res Methodol 2014;14:76.
[15] Rosoff PM, Werner C, Clipp EC, Guill AB, Bonner M, Demark-Wahnefried W. Response rates to a mailed survey targeting childhood cancer survivors: a comparison of conditional versus unconditional incentives. Cancer Epidemiol Biomarkers Prev 2005;14:1330–2.
[16] Bosjnjak M, Tuten TL. Prepaid and promised incentives in web surveys: an experiment. Soc Sci Comput Rev 2003;21:208.
[17] VanGeest JB, Johnson TP, Welch VL. Methodologies for improving response rates in surveys of physicians: a systematic review. Eval Health Prof 2007;30:303–21.
[18] Hardigan PC, Succar CT, Fleisher JM. An analysis of response rate and economic costs between mail and web-based surveys among practicing dentists: a randomized trial. J Community Health 2012;37:383–94.
[19] Aitken C, Power R, Dwyer R. A very low response rate in a survey of medical practitioners. Aust N Z J Public Health 2008;32:288–9.
[20] Leece P, Bhandari M, Sprague SI. Internet versus mailed questionnaires: a controlled comparison (2). J Med Internet Res 2004;6:e39.
[21] James KM, Ziegenfuss JY, Tilburt JC, Harris AM, Beebe TJ. Getting physicians to respond: the impact of incentive type and timing on physician survey response rates. Health Serv Res 2011;46:232–42.
