methoden, daten, analysen · Jg. 7(1), S. 89-122 DOI: 10.12758/mda.2013.004
Respondent Incentives in a National Face-to-Face Survey
Effects on Outcome Rates, Sample Composition and Fieldwork Efforts

Michael Blohm and Achim Koch
Abstract
Nonresponse is an ongoing challenge for survey research. In the German General Social Survey (ALLBUS) 2010, an experiment was set up to test the effect of respondent incentives on outcome rates, sample composition and fieldwork efforts. A random subsample of target persons was offered a monetary incentive of €10 to be paid upon completion of the interview. The other part of the sample acted as a control group receiving no incentive. The incentive used in ALLBUS 2010 led to an increase in the response rate, mainly by improving the cooperation rate. It did not change the sample composition in a major way. Concerning fieldwork efforts, a slight reduction was observed: in the incentive condition, a given number of interviews was achieved with a lower number of contact attempts than in the no-incentive condition.

© The Author(s) 2013. This is an Open Access article distributed under the terms of the Creative Commons Attribution 3.0 License.
Declining response rates are a continuing problem for household surveys in many Western countries (Atrostic, Bates, Burt, & Silberstein, 2001; de Leeuw & de Heer, 2002; Dixon & Tucker, 2010; Kohut, Keeter, Doherty, Dimock, & Christian, 2012). As a consequence, low response rates are no longer exceptional in household surveys in Europe. For instance, in the fifth round of the European Social Survey (ESS), which was fielded in 2010, 4 out of the 27 participating countries had a response rate below 50%.2 In the fourth wave of the European Values Study (EVS), fielded in 2008, 11 out of 47 countries/regions had a response rate of 50% or less.3 In the third European Quality of Life Survey (EQLS), fielded in 2011, 17 out of 27 countries had a response rate of at most 50%, among them 8 countries with a response rate below 40% (Eurofound, 2012).
Various attempts have been made to increase response rates or at least to halt downward trends in response rates. These include, for instance, an increased number of contact attempts, the use of advance letters or the provision of incentives to sample persons to encourage survey participation (Groves et al., 2004). The use of respondent incentives in order to increase response has a long tradition in mail surveys. More recently, however, the use of incentives has also become more common in face-to-face surveys (Kulka, Eyerman, & McNeeley, 2005; Singer, 2002). The ESS is a case in point. In this repeated cross-national survey, the number of countries using some kind of respondent incentive increased continuously over the first five survey rounds. Whereas in the first round, which was fielded in the years 2002/2003, only 7 out of the 16 permanent countries used a respondent incentive, this number rose to 8, 10, 11 and 13 countries, respectively, in rounds 2 to 5.4
1 The authors would like to thank Dorothée Behr, Henning Best, Christof Wolf and the reviewers for providing constructive comments on a former draft of the paper.
2 Information from ESS website: http://ess.nsd.uib.no/ess/round5/deviations.html (03.04.2013).
3 Personal communication from Evelyn Brislinger, GESIS (28.03.2013).
4 Whether or not the use of an incentive actually increased the response rate in the relevant ESS countries (and, if yes, to what extent) unfortunately remains an unanswerable question, since apart from one exception (Phelps, 2008) the effects of introducing incentives were not measured in the ESS (or at least, the results of the relevant experiments were not (yet) published).

Figure 1: Contact, capability, cooperation and response rates in ALLBUS 1994-2008 (in %)

The German General Social Survey (ALLBUS) has also faced an increase in nonresponse in the past years. The ALLBUS is a biennial face-to-face survey of the adult population, covering a wide range of topics and aiming at charting the long-term trends in attitudes and behaviour in Germany (http://www.gesis.org/en/allbus). Between 1994 and 2008 the response rate of the ALLBUS decreased from 54% to 40%. The main reason for this decline was a drop in the cooperation rate (see Figure 1).5 In contrast, the contact rate remained broadly stable at a high level. Our own analyses of data from the ALLBUS contact forms show that this stability can be attributed to an increase in the number of contact attempts over time.

5 For a definition of the outcome rates, see Appendix 1. In order to ensure comparability over time, ALLBUS 1998 and 2002 were excluded, since in both years the survey was fielded by a different survey organisation.

In this context, ALLBUS 2010 included an experiment that was set up to investigate whether the provision of a respondent incentive might help to counteract the downward trend in the response rate. Using a respondent incentive seemed to be a promising measure insofar as an incentive will primarily help to reduce refusals (Singer, 2002).6 The experiment in ALLBUS 2010 was designed to help clarify three issues. First, the experiment should answer the question of whether respondent incentives increase cooperation and response rates in a face-to-face survey like ALLBUS. Second, the experiment should provide evidence on whether respondent incentives affect sample composition. Third, the experiment should give some indication of whether the use of a respondent incentive affects fieldwork efforts and survey costs.7

Thus far, the literature on incentive effects is less comprehensive with regard to face-to-face surveys than with regard to mail surveys (see section 2). There is evidence that incentives help to increase response rates in face-to-face surveys as well. However, there is less (and often only mixed) evidence as regards the effects of incentives on sample composition and survey costs. This is unfortunate, since in the past decade a shift in the focus of nonresponse research has occurred (Singer, 2006). It is increasingly acknowledged that a concentration on response rates is inadequate; the focus has to be directed towards nonresponse bias. Regarding the use of incentives, the crucial question is whether incentives reinforce or counteract tendencies to underrepresent certain subgroups of the population. Respondent incentives can help to decrease nonresponse bias if they disproportionately attract target persons whose response propensity would otherwise be below average.8 Furthermore, the cost implications of various survey design features (such as the number of contact attempts or refusal conversion activities) have gained more attention in recent years. Regarding respondent incentives, the question is whether the obvious increase in costs associated with their use will (at least partly) be offset by a reduction in the fieldwork efforts necessary to complete the interviews.
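The relationship between the outcome rates discussed above can be illustrated with a small sketch. The exact ALLBUS definitions are given in the paper's Appendix 1 (not reproduced here); the decomposition below — the response rate as the product of conditional contact, capability and cooperation rates — is a common simplification in the style of the AAPOR Standard Definitions, and the disposition counts are invented purely for illustration, not ALLBUS data.

```python
# Hedged sketch: outcome rates for a face-to-face survey, assuming a
# simplified AAPOR-style decomposition (not necessarily the Appendix 1
# definitions used in the ALLBUS). All counts below are made up.

def outcome_rates(eligible, contacted, capable, interviews):
    """Return (contact, capability, cooperation, response) rates.

    Each intermediate rate is conditional on the previous stage, so
    their product equals the overall response rate.
    """
    contact = contacted / eligible       # share of eligible persons reached
    capability = capable / contacted     # share of contacted able to respond
    cooperation = interviews / capable   # share of capable who complete
    response = interviews / eligible     # overall response rate
    return contact, capability, cooperation, response

# Illustrative numbers only:
contact, capability, cooperation, response = outcome_rates(
    eligible=5000, contacted=4750, capable=4600, interviews=2000)

# The decomposition holds by construction:
assert abs(contact * capability * cooperation - response) < 1e-12
print(f"contact {contact:.1%}, cooperation {cooperation:.1%}, "
      f"response {response:.1%}")
```

Under this decomposition, the pattern reported for the ALLBUS — a stable contact rate alongside a falling cooperation rate — translates directly into a falling response rate, which is why an incentive aimed at reducing refusals targets the cooperation component.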
In the ALLBUS 2010 experiment, we wanted to test an incentive appropriate for use in future survey rounds, given the specific design of the ALLBUS and the available budget. It was decided to investigate the effect of a €10 conditional cash incentive: a random subsample of the ALLBUS 2010 target persons was to receive a conditional monetary incentive of €10, and the rest of the sample was to act as a control group receiving no incentive. In the survey literature (see
6 Up to that point, ALLBUS had used a respondent incentive only once. In 2002, all ALLBUS respondents received a commemorative coin worth €10. At that time, no efforts were made to measure the effect of the incentive experimentally.
7 Note that in the present paper we are not concerned with the potential effects incentives may have on measurement, i.e., whether or not they change the response behaviour of respondents.
8 As far as the ALLBUS is concerned, for instance, an issue is whether an incentive might help to reduce the underrepresentation of lower educated people that had been observed in previous rounds of the survey (Wasmer, Scholz, & Blohm, 2010).
section 2), unconditional ('prepaid') incentives are often described as preferable to conditional ('promised') incentives. In survey practice, however, conditional incentives play an important role in face-to-face surveys. In the fifth round of the ESS, for instance, six countries offered a conditional cash incentive or a shopping voucher to respondents, whereas only two countries provided an unconditional cash incentive to all target persons. Likewise, in the EVS 2008, eight countries used a conditional cash incentive and only one country an unconditional cash incentive.