
The “Yeah, Whatever” Phenomenon: Poll Question Complexity and Don’t Know and Mildly Affirmative Responses

Mark Harmon (University of Tennessee)
Robert Muenchen (University of Tennessee)
Barbara Kaye (University of Tennessee)

Keywords: Methodological challenges and improvements, including in the areas of sampling, measurement, survey design and survey response or non-response

Abstract

Background

Over the past two decades or so, in-person, mail, and telephone questionnaires have been joined by online surveys. It is therefore worth re-examining long-standing assumptions about how respondents interact with opinion polling. One assumption is that complex questions can confuse respondents and lead to ambivalent responses such as “don’t know” and “no opinion.” One study that evaluated 896 poll questions found that for two-option replies, such as “yes/no,” question complexity is positively related to the number of “don’t know” replies (Harmon, 2001). The same study also found a positive association between question complexity and affirmative answers.

It is unclear, however, whether question complexity affects response choices in online surveys. Digital polling differs from traditional methods in several ways. Whereas mail and telephone respondents typically are selected through random procedures, internet respondents are often self-selecting members of polling panels or are recruited through other purposive sampling techniques (Langer, 2018). Further, because online surveys are read on a screen, they may be more difficult to understand than surveys delivered by other methods.

This study extends the earlier work of Harmon (2001) by examining the relationships between question complexity, as measured by readability formulas, and ambivalent, mid-range, and affirmative responses. The study also examines whether polling method (online, in-person, or telephone) influences the relationship between question complexity and ambivalent responses.

Methods

A total of 3,164 poll questions were collected from the Roper Center for Public Opinion Research iPOLL archive. The complexity of each question text was calculated using the statistical program R, which converted question text into readability scores based on six separate tests: the Flesch-Kincaid Grade Level Readability Formula, the Gunning FOG Index, the Coleman-Liau Formula, the SMOG Index, the Automated Readability Index, and Average Grade Level. Regressions were then run comparing “don’t know” replies against question text complexity as measured by the readability formulas.
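The abstract does not name the specific R packages used. As a rough, hypothetical sketch, the scoring and regression steps could be reproduced with the quanteda.textstats package; the input file and column names (question_text, pct_dont_know) below are illustrative assumptions, not details from the study.

    # A minimal sketch, assuming one row per poll question with the question
    # wording and the percentage of "don't know" replies it drew; the package
    # choice and column names are assumptions for illustration.
    library(quanteda.textstats)

    polls <- read.csv("ipoll_questions.csv", stringsAsFactors = FALSE)

    # Score each question on several readability formulas at once
    scores <- textstat_readability(
      polls$question_text,
      measure = c("Flesch.Kincaid", "FOG", "Coleman.Liau.grade", "SMOG", "ARI")
    )

    # Average grade level across the five formulas above
    scores$avg_grade <- rowMeans(scores[, -1])

    # Regress the "don't know" percentage on one complexity measure
    summary(lm(polls$pct_dont_know ~ scores$Flesch.Kincaid))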

Results

The most striking finding of this study is that online questions averaged 23.61% “don’t know” responses, compared with 6.76% in personal interviews and 5.72% in phone surveys. These differences are significant for both online v. phone (t = 8.25, p < .001) and online v. personal interview (t = 5.80, p < .001).
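Such a mode comparison amounts to two-sample t-tests (Welch’s, by R’s default) on the per-question “don’t know” percentages. A minimal sketch, assuming a hypothetical polls data frame with mode and pct_dont_know columns; the abstract does not describe its data layout.

    # Compare "don't know" rates by polling mode; data frame and column
    # names are assumptions for illustration
    polls <- read.csv("ipoll_questions.csv", stringsAsFactors = FALSE)

    t.test(pct_dont_know ~ mode,
           data = droplevels(subset(polls, mode %in% c("online", "phone"))))

    t.test(pct_dont_know ~ mode,
           data = droplevels(subset(polls, mode %in% c("online", "in_person"))))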

When readability scores were compared for each polling method, however, online and in-person surveys showed no significant relationship between question complexity and the percentage of “don’t know” responses, but questions administered by telephone showed an inverse relationship: the more complex the question, the fewer the “don’t know” responses. This significant inverse relationship was consistent across five of the six readability scores for phone surveys.
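Testing the complexity relationship within each mode can be done by fitting the regression separately per polling method. A sketch under the same hypothetical data assumptions as above:

    # One regression per polling mode: question complexity vs. "don't know"
    # share; file and column names remain illustrative assumptions
    library(quanteda.textstats)

    polls <- read.csv("ipoll_questions.csv", stringsAsFactors = FALSE)
    polls$flesch_kincaid <- textstat_readability(
      polls$question_text, measure = "Flesch.Kincaid")$Flesch.Kincaid

    by(polls, polls$mode, function(d)
      summary(lm(pct_dont_know ~ flesch_kincaid, data = d)))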

Overall, poll question complexity also did not correlate at statistically significant levels with mid-range answers or affirmative replies, though a greater number of answer options was associated with fewer “don’t know” responses.