methods, data, analyses | Vol. 9(2), 2015, pp. 213-228 DOI: 10.12758/mda.2015.012

Web Surveys Optimized for Smartphones: Are there Differences Between Computer and Smartphone Users?

Ioannis Andreadis
Aristotle University of Thessaloniki
This paper shows that computer users and smartphone users taking part in a web survey
optimized for smartphones give responses of almost the same quality. By combining a one-question-per-page design with innovative page navigation methods, we can obtain high quality data from both computer and smartphone users. The two groups of users are also compared with regard to their precisely measured item response times. The analysis shows that using a smartphone instead of a computer increases the geometric mean of item response times by about 20%. The data analyzed in this paper were collected by a smartphone-friendly web survey. All question texts are short and the response buttons are large and easy to use. As a result, there are no significant interactions between smartphone use and either the length of the question or the age of the respondent. Thus, the longer response times among smartphone users should be attributed to other causes, such as the likelihood of smartphone users being distracted by their environment.
Keywords: web surveys, mobile surveys, AJAX navigation, data quality, item response times, smartphones

© The Author(s) 2015. This is an Open Access article distributed under the terms of the Creative Commons Attribution 3.0 License. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.
1 Introduction

The aim of this paper is to study the differences between computer and smartphone users when they complete web surveys optimized for smartphones. The comparison is done on two dimensions. The first dimension refers to the quality of the responses, e.g. the frequencies of no answers or neutral responses. The second dimension refers to the time the respondents spend answering the questions, i.e. the item response times.
The most recent studies on the effects of mobile use on data quality report limited differences between mobile and computer respondents. Mavletova (2013), analyzing an experiment in Russia, reports that computer respondents type longer responses to open-ended questions. On the other hand, she finds that mobile and computer users have similar levels of socially undesirable and non-substantive responses. In addition, the two groups do not differ significantly in terms of primacy effects. De Bruijne and Wijnant (2013), after running an experiment with participants randomly assigned to three modes (mobile, computer and a hybrid), did not find any significant differences. Toepoel and Lugtig (2014) found no differences between mobile and desktop users with regard to item nonresponse, the length of answers to a short open-ended question, or the number of responses in a check-all-that-apply question. Finally, Wells, Bailey, and Link (2014) randomly assigned roughly 1,500 online U.S. panelists and smartphone users to either a mobile application or a computer. They did not find any significant response-order effects across modes. However, they report that computer respondents provide significantly longer responses than mobile respondents.
Many web survey researchers have reported that the number of people who use mobile devices to take part in web surveys is increasing rapidly. In addition, the time spent on a web survey is crucial for the quality of the collected data. Longer web surveys suffer from larger break-off rates and greater probability of lower quality responses. Therefore, many recent publications deal with the time spent on responding to web surveys while using mobile devices. Both Mavletova (2013) and De Bruijne and Wijnant (2013) report that mobile device users need more time to complete the questionnaire than computer users. Conversely, Toepoel and Lugtig (2014) find that total response times are almost the same across devices.
Direct correspondence to Ioannis Andreadis, Laboratory of Applied Political Research, Department of Political Sciences, Aristotle University Thessaloniki, 46 Egnatia St., Thessaloniki, 54625 Hellas (Greece). E-mail: firstname.lastname@example.org

2 Designing for Smartphone Users

Previous studies on measurement effects have found minimal differences between mobile and computer respondents. The most challenging difference concerns the length of open-ended responses. Nowadays, people are becoming more and more experienced at typing text with the small keys of their smartphones. Nevertheless, it is still much easier to type on a regular full-size keyboard than on a mobile device keypad. As a result, we should continue to expect longer responses to open-ended questions from computer users, especially when the response needs more than 3-4 words.
A good web survey design can remove most of the remaining differences. According to Stapleton (2013), a horizontal orientation of response choices may increase satisficing by smartphone users, i.e. they are more likely to select one of the first response choices. Vertical scrolling seems to be better than horizontal scrolling. In fact, Mavletova and Couper (2014) argue in favor of a vertical scrolling design and report that it leads to significantly faster completion times and fewer technical problems; as they argue, the smaller number of interactions with the server reduces the risk of dropped connections. In contrast, Wells, Bailey, and Link (2014) argue in favor of minimal vertical scrolling and support the idea of using one question per page, short questions and short sets of response lists.
3 Data

The findings presented in this paper are based on the analysis of the paradata collected in May 2014 by the Greek Voting Advice Application (VAA) HelpMeVote VoteMatch Greece (Andreadis, 2013). Voting advice applications are special types of opt-in web surveys that help users discover their proximity to the political parties.
Footnote 1: Couper and Peterson (2015) refer to two kinds of times: between-page (transmission) time and within-page (response) time. With the AJAX navigation system, the former is zero.
These applications can attract thousands or even millions of users during the pre-electoral period. HelpMeVote is the Greek partner in the multi-national European project VoteMatch (votematch.eu). The target of this project is to run VAAs for the European Parliament elections.
HelpMeVote follows the best practices used in both web and mobile survey design. It runs both on computers and on smartphones, automatically scales to any screen size, and supports both touch and mouse events. It displays one question per page and supports AJAX navigation. It uses a large font size and short texts, and the response options are displayed vertically as large buttons.
The questionnaire includes 31 Likert-type questions. Each question is displayed on a separate page. Respondents have six answer choices: five buttons to express their level of agreement with a statement and a "No answer" button. When a respondent clicks on a button, the timestamp is recorded in a hidden input field and the user is forwarded to the next page. Besides the 31 main questions, HelpMeVote users are asked to fill in a form with questions about their gender, age group, education level, and voting behavior. Finally, HelpMeVote captures the user-agent header field, which enables the detection of the user's browser and device type (i.e. smartphone, computer, etc.). When the respondent submits the survey, everything is stored in a database. Thus, each database record includes the user responses, the timestamps and the device type.
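Because each button click stores a timestamp and AJAX navigation makes between-page time effectively zero, the time spent on an item can be derived as the difference between consecutive click timestamps. The following sketch illustrates this idea; the record format and function name are hypothetical, not the actual HelpMeVote code:

```python
from datetime import datetime

def item_response_times(timestamps):
    """Given the ordered list of per-page click timestamps (ISO 8601
    strings), return the seconds spent on each item, i.e. the
    difference between consecutive timestamps."""
    parsed = [datetime.fromisoformat(t) for t in timestamps]
    return [(b - a).total_seconds() for a, b in zip(parsed, parsed[1:])]

# Hypothetical timestamps for three consecutive pages:
stamps = ["2014-05-20T10:00:00", "2014-05-20T10:00:07", "2014-05-20T10:00:12"]
print(item_response_times(stamps))  # [7.0, 5.0]
```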
The HelpMeVote/VoteMatch Greece dataset includes about 80,000 completed questionnaires. Most respondents used a computer (80.7%) or a smartphone (13.5%); the rest used other mobile devices (mostly tablets). The focus of this paper is on the comparison between smartphone and computer users when both groups use a smartphone-friendly web survey. Therefore, users of other mobile devices were not included in the analysis.
4 Methods and Variables
4.1 Quality of Responses

HelpMeVote does not include any open-ended items. Thus, the hypothesis that computer respondents provide longer responses cannot be tested. On the other hand, computer and smartphone users of HelpMeVote can be compared on other data quality patterns.2 For instance, if smartphone users selected more non-substantive responses (i.e. "Neither agree nor disagree" or "No answer") than computer users, this would suggest that smartphone users provide data of lower quality. Similarly, smartphone users can be tested for primacy effects (i.e. selecting the first response choice more often) or any other response-order effects.

Footnote 2: For a list of mode effects related to data quality see Bethlehem and Biffignandi, 2011, p. 245.
When a chi-square test is applied to a large sample, it will almost always give a small p-value: even when there is no practical difference between expected and observed frequencies, the test will reject the hypothesis of independence. In addition, running a separate test for each of the 31 items included in HelpMeVote would result in multiple comparisons and thus a higher chance of incorrectly rejecting the null hypothesis, i.e. of classifying nonsignificant differences as significant.
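A quick numerical sketch of the large-sample problem: with invented counts of roughly the size of this dataset, a gap of only 1.5 percentage points between device groups is already "significant" at the 1% level, while an effect-size measure such as Cramér's V shows the difference is practically negligible. All counts below are illustrative assumptions, not figures from the paper:

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]]
    (rows: device type, columns: response category)."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts: computer users split 50.0%/50.0% on a category,
# smartphone users split 51.5%/48.5%.
chi2 = chi2_2x2(32280, 32280, 5562, 5238)
n = 32280 + 32280 + 5562 + 5238
cramers_v = math.sqrt(chi2 / n)  # for a 2x2 table this equals phi

# chi2 exceeds 6.63 (the 1% critical value, df = 1), yet V stays near 0.01
print(round(chi2, 2), round(cramers_v, 3))
```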
The aforementioned problems are avoided by creating six new variables. The value of each new variable reflects the number of times the respondent has chosen the corresponding response option (“Frequency of Strongly Disagree” to “Frequency of Strongly Agree” and “Frequency of No Answer”). The range of values of these new variables is from 0 to 31. Each of these variables takes the minimum value (0) when the respondent does not select the corresponding answer in any of the 31 questions. Similarly, it takes the maximum value (31) when the respondent selects the same answer for all questions. With these variables it is easy to analyze mode effects between mobile and computer users. For instance, a comparison of the average values of the variable “Frequency of Strongly Disagree” between mobile and computer users will show if there is a different primacy effect between modes.
Similarly, a comparison of the average values of the variables: “Frequency of Neither agree nor disagree” and “Frequency of No Answer” between the two groups will reveal if smartphone users select non-substantive responses more often than computer users.
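The construction of the six frequency variables described above can be sketched as follows; the variable names and the data layout are illustrative, not the actual HelpMeVote code:

```python
from collections import Counter

OPTIONS = ["Strongly Disagree", "Disagree", "Neither agree nor disagree",
           "Agree", "Strongly Agree", "No Answer"]

def response_frequencies(answers):
    """Map one respondent's 31 item answers to the six frequency
    variables ("Frequency of Strongly Disagree", ...,
    "Frequency of No Answer"), each ranging from 0 to 31."""
    counts = Counter(answers)
    return {f"Frequency of {opt}": counts.get(opt, 0) for opt in OPTIONS}

# Hypothetical respondent with 31 answers:
answers = (["Agree"] * 12 + ["Disagree"] * 10
           + ["Neither agree nor disagree"] * 5
           + ["Strongly Agree"] * 3 + ["No Answer"] * 1)
freqs = response_frequencies(answers)
print(freqs["Frequency of Agree"], freqs["Frequency of No Answer"])  # 12 1
```

Averaging these variables per device group then reduces the 31 separate item tests to six group comparisons.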
5 Item Response Times

The analysis of item response times is much more complicated, for two reasons. First, item response times depend on characteristics of both the respondents and the items. As a result, a multi-level analysis of the item response times is needed. Second, data cleaning is needed to deal with extremely short or extremely long item response times.
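The cleaning step and the multiplicative (log) scale on which response times are usually compared, which is why the abstract reports a geometric mean, can be sketched as follows. The cut-off values and the sample times are illustrative assumptions, not the paper's actual cleaning rules:

```python
import math

def cleaned_geometric_mean(times, low=1.0, high=120.0):
    """Drop implausibly short or long item response times (in seconds),
    then return the geometric mean of the remaining times.
    The default cut-offs are illustrative only."""
    kept = [t for t in times if low <= t <= high]
    return math.exp(sum(math.log(t) for t in kept) / len(kept))

computer = [4.0, 5.0, 6.0, 0.2, 300.0]          # two outliers get trimmed
smartphone = [t * 1.2 for t in [4.0, 5.0, 6.0]]  # uniformly ~20% slower

g_c = cleaned_geometric_mean(computer)
g_s = cleaned_geometric_mean(smartphone)
print(round(g_s / g_c, 2))  # 1.2, i.e. a 20% higher geometric mean
```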