
Polling During Times of National Unrest – Variance in Response Rates and Impact on Validity of Results (a case study from Georgia)

Ani Lordkipanidze (GORBI) - Georgia
Erekle Antadze (GORBI) - Georgia

Keywords: Response Rate, Unrest, Protests, Volatility, Methodology


Abstract

Response rates are widely recognized in the literature as a critical factor influencing the accuracy and validity of study findings. High response rates are generally associated with more reliable results, whereas low response rates are thought to introduce bias, particularly when non-respondents differ significantly from respondents.
This ongoing experiment examines the impact of response rates on study outcomes using data from multiple completed, ongoing, and upcoming nationwide Computer-Assisted Personal Interviewing (CAPI) surveys conducted amid the protracted volatility and unrest in Georgia before, during, and after the 2024 Parliamentary Elections.
These studies were carried out throughout the election year in Georgia, which was characterized by two mass protest movements (in spring and winter; the latter is still ongoing) and a highly polarized election cycle (summer and autumn), culminating in a contested election. This backdrop led to significant variance in response rates, and a comparative analysis of surveys conducted during this period prompted an investigation into potential negative effects on the data collected by the GORBI team.
Our analysis was two-fold. First, we compared the surveys directly by examining demographic breakdowns of respondents to determine whether response rates differed significantly across particular demographic groups (defined by income, education, and employment status), either at the national level or in the capital city, where protests were primarily concentrated. Surprisingly, no significant differences were found, even among the demographic groups in the capital most likely to attend protests (specifically, individuals aged 18–29 with higher education).
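
To illustrate this first comparison, a demographic check of this kind could be run as a chi-square test of independence between survey wave and a demographic variable. The sketch below is a minimal illustration, not GORBI's actual code; the DataFrame and column names ("wave", "income_bracket", "education", "employment") are hypothetical placeholders.

    import pandas as pd
    from scipy.stats import chi2_contingency

    def composition_shift_pvalue(df: pd.DataFrame, demographic: str) -> float:
        """Test whether the demographic mix of respondents differs across
        survey waves. A small p-value would suggest that response patterns
        shifted for this demographic; column names are illustrative."""
        table = pd.crosstab(df["wave"], df[demographic])
        chi2, p_value, dof, expected = chi2_contingency(table)
        return p_value

    # Hypothetical usage: run the check for each demographic of interest,
    # optionally restricting df to respondents in the capital first.
    # for var in ["income_bracket", "education", "employment"]:
    #     print(var, composition_shift_pvalue(surveys, var))
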
To investigate further, we developed a model that randomly generates multiple smaller surveys from the collected data, simulating lower response rates, to test whether significant differences in study outcomes would have emerged had response rates been lower to begin with. This software, written in Python and slated for open-source release alongside the final paper (to ensure easy reproducibility of the experiment), enabled us to analyze survey waves independently and to examine whether simulated lower response rates could significantly affect not only demographic data but also the primary, substantive questions in the surveys. Thus far, the results have consistently shown no significant differences between outcomes.
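
The core of such a simulation can be sketched as follows: repeatedly draw random subsamples of the completed interviews (mimicking a lower response rate) and test each subsample's answer distribution against the full sample's. This is a minimal sketch under assumed data structures, not the forthcoming open-source release; the question column and parameters are illustrative.

    import numpy as np
    import pandas as pd
    from scipy.stats import chisquare

    def simulate_lower_response_rate(df: pd.DataFrame, question: str,
                                     rate: float = 0.5, n_sims: int = 1000,
                                     alpha: float = 0.05, seed: int = 0) -> float:
        """Repeatedly subsample the respondents to mimic a survey that
        achieved only `rate` of the realized response rate, and return the
        share of simulations in which the subsample's answer distribution
        for `question` differs significantly from the full sample's."""
        rng = np.random.default_rng(seed)
        full_props = df[question].value_counts(normalize=True).sort_index()
        n_sub = int(len(df) * rate)
        flagged = 0
        for _ in range(n_sims):
            sub = df.sample(n=n_sub, random_state=rng)
            observed = sub[question].value_counts().reindex(full_props.index,
                                                            fill_value=0)
            # Expected counts if the subsample followed the full-sample distribution.
            expected = full_props * n_sub
            _, p_value = chisquare(f_obs=observed, f_exp=expected)
            if p_value < alpha:
                flagged += 1
        return flagged / n_sims

    # Hypothetical usage: at a simulated 50% of the achieved response rate,
    # how often would a key substantive question have looked different?
    # print(simulate_lower_response_rate(surveys, "party_preference", rate=0.5))

Because each subsample is drawn from the full sample rather than collected independently, this goodness-of-fit comparison only approximates comparing two independent surveys; it nonetheless captures whether a lower response rate alone would have moved the headline distributions.
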
This ongoing experiment will conclude once the situation stabilizes in Georgia (and a new equilibrium response rate for CAPI is established). Unless there is a dramatic shift in the findings in the coming months, our research suggests that survey results may be more resilient to variations in response rates than commonly assumed, indicating that rigorous sampling methodologies in probability-based surveys can help mitigate potential biases introduced by lower response rates.