
The Effects of Splitting Long Questionnaires in Web Surveys

Evangelia Kartsounidou (Aristotle University of Thessaloniki)
Ioannis Andreadis (Aristotle University of Thessaloniki)

Keywords: Survey research and questionnaire design

Abstract

One of the main drawbacks of web surveys is their low response rate. This paper explores methods to increase the response rate of surveys conducted online. Given that the length of the questionnaire is a significant factor influencing the response process, mainly due to respondent burden, creating shorter questionnaires could reduce drop-outs and achieve higher response rates. Splitting the questionnaire into shorter parts is one way to create shorter questionnaires without excluding any questions, as shortening methods propose. More specifically, we split the initial questionnaire into shorter parts and send new follow-up invitations to those who have already completed the first part of the survey, asking them to respond to a different part of the questionnaire. The main research questions of this paper are whether: i) a new follow-up invitation to those who have already completed the first part of the survey is a reliable approach to reduce drop-outs and increase the aggregate response rate of the survey, and ii) splitting affects the response quality of the survey.
Using the Greek Candidate Survey of 2015 as a case study, we implemented the following experiment. All units of the target population were randomly split into two groups. The respondents of the first group (A) received the extended version of the online questionnaire, which takes approximately 50 minutes to complete. The respondents of the second group (B) received a short part (Part 1) of the questionnaire (10 minutes), while the remaining questions were sent later, in a subsequent phase, as a separate questionnaire (Part 2) only to those who had completed the first part. Finally, we compare the response rates and the response behaviours of the two surveys (A and B). The indicators used to examine the response quality of the survey are response time, the length of answers to open-ended questions, item nonresponse, and straight-lining in grid questions.
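To make the two-phase delivery concrete, the following Python sketch outlines the logic described above. The names target_population, send_invitation and has_completed are hypothetical placeholders rather than part of the study; this is a minimal illustration of the design under those assumptions, not the actual fieldwork software.

import random

def run_split_design(target_population, send_invitation, has_completed):
    # Randomly split the target population into two groups of roughly equal size.
    units = list(target_population)
    random.shuffle(units)
    group_a = units[:len(units) // 2]
    group_b = units[len(units) // 2:]

    # Group A: a single invitation to the extended (~50 min) questionnaire.
    for unit in group_a:
        send_invitation(unit, questionnaire="full")

    # Group B, phase 1: invitation to the short Part 1 (~10 min).
    for unit in group_b:
        send_invitation(unit, questionnaire="part1")

    # Group B, phase 2: a follow-up invitation to Part 2, sent only to those
    # who have already completed Part 1.
    for unit in group_b:
        if has_completed(unit, "part1"):
            send_invitation(unit, questionnaire="part2")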

The result was 302 completed questionnaires (187 in survey A and 115 in survey B) out of 1,119 invitations (546 in survey A and 573 in survey B). As expected, the shorter questionnaire yielded a higher response rate (40.5%). One out of two respondents who completed the first part and received an invitation for the rest of the questionnaire completed it as well. Although the response rates of the separate parts of survey B are considerably high, survey A ultimately yielded more completed questionnaires than the composite survey B. As for response quality, the respondents of survey A generally spend less time and give shorter answers to the open-ended questions than the respondents of survey B, while there are no significant differences between the two surveys in terms of item nonresponse or indications of straight-lining.
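For reference, the implied rates can be reconstructed from the reported counts, assuming the 40.5% figure refers to Part 1 of survey B (the Part 1 completion count below is back-calculated from that rate, not reported directly):

Survey A: 187 / 546 ≈ 34.2%
Survey B, Part 1: 0.405 × 573 ≈ 232 completed Part 1
Survey B, Part 2: 115 / 232 ≈ 50% of Part 1 completers, i.e. 115 / 573 ≈ 20.1% of group B overall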