Leveraging Propensity Score Adjustment and Calibration Techniques to Reduce Response Bias in Volunteer Web Survey Panels in Six Countries

Ismael Nooraddini (Accenture Federal Services (AFS)) - United States

Keywords: Opt-in web surveys, post-hoc data collection, propensity score adjustment, calibration, data reliability, response bias


Abstract

High-quality survey data is becoming increasingly costly to obtain as response rates decline and data collection expenses rise. In response, opt-in web surveys have gained popularity. While there is a substantial literature on the challenges of online surveys, research on addressing their data quality problems in developing countries after fieldwork remains limited. This study explores post-hoc adjustment techniques that practitioners can apply after data collection to improve online survey data quality.

In early October 2024, we conducted a cross-sectional online survey of 6,005 participants from Brazil, Egypt, India, Germany, the Philippines, and South Africa using a volunteer web panel. The survey incorporated six to seven benchmark items drawn from recent population surveys to assess data reliability.

Following established methodology, we combined propensity score adjustment with calibration to minimize response bias. This involved adjusting the design weights with propensity scores, estimated using recent barometer surveys as the reference dataset, to correct for selection bias. The adjusted weights were then calibrated to control totals for the target population to address coverage bias. Additionally, we applied weight trimming to stabilize extreme weights, particularly in countries with low internet penetration.
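For readers interested in the mechanics, the following is a minimal Python sketch of a weighting pipeline of this kind: a propensity model fit on the stacked web-panel and reference (barometer) data, inverse-propensity adjustment of the web-panel design weights, raking to population control totals, and weight trimming. The variable names (in_web, design_wt, the covariate list) and the trimming cap are illustrative assumptions, not the study's actual specification.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def propensity_adjust(df, covariates, design_wt="design_wt", flag="in_web"):
    """One common formulation: fit P(web panel | covariates) on the stacked
    web-panel + reference data, then multiply the web-panel design weights by
    the odds of belonging to the reference sample, (1 - p) / p."""
    X = pd.get_dummies(df[covariates], drop_first=True)
    p = LogisticRegression(max_iter=1000).fit(X, df[flag]).predict_proba(X)[:, 1]
    web = df[flag] == 1
    adjusted = df.loc[web, design_wt].to_numpy() * (1 - p[web]) / p[web]
    return adjusted * len(adjusted) / adjusted.sum()  # rescale to mean 1

def rake(weights, sample_margins, pop_margins, max_iter=100, tol=1e-8):
    """Iterative proportional fitting (raking) to population control totals.
    sample_margins: {variable: array of category codes per respondent};
    pop_margins:    {variable: {category: population share}}."""
    w = np.asarray(weights, dtype=float).copy()
    for _ in range(max_iter):
        w_prev = w.copy()
        for var, codes in sample_margins.items():
            codes = np.asarray(codes)
            for cat, target in pop_margins[var].items():
                mask = codes == cat
                current = w[mask].sum() / w.sum()
                if current > 0:
                    w[mask] *= target / current
        if np.max(np.abs(w - w_prev)) < tol:
            break
    return w

def trim(weights, cap=5.0):
    """Cap weights at `cap` times the mean weight, then renormalise so the
    weight total is preserved (reduces variance at the cost of some bias)."""
    w = np.asarray(weights, dtype=float)
    trimmed = np.minimum(w, cap * w.mean())
    return trimmed * w.sum() / trimmed.sum()
```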

On the benchmark items, we compared online responses with those of the subset of face-to-face respondents who had internet access, assessing the design effect and mean absolute error (MAE) with and without the weighting adjustments. Results, assessed against the benchmarks, suggest that propensity score adjustment and calibration effectively reduce online survey response bias. Where weights are unstable, weight trimming can help stabilize them, though at the cost of introducing some bias. Finally, careful selection of benchmarks is essential, as some resulted in higher MAE than others.
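A similarly hedged sketch of the evaluation step is given below: the MAE of weighted web estimates against benchmark values, and Kish's approximate design effect due to unequal weighting (a common choice for assessing weighting adjustments; the abstract does not specify which variant was used). The values in the usage comment are made up, not study results.

```python
import numpy as np

def weighted_mean(values, weights):
    """Weighted estimate of a benchmark item from the web sample."""
    v, w = np.asarray(values, float), np.asarray(weights, float)
    return np.sum(v * w) / np.sum(w)

def mae(benchmarks, estimates):
    """Mean absolute error between benchmark values and (weighted) web
    estimates, both passed as dicts keyed by benchmark item."""
    return float(np.mean([abs(estimates[k] - benchmarks[k]) for k in benchmarks]))

def kish_deff(weights):
    """Kish's design effect due to weighting: n * sum(w^2) / (sum(w))^2."""
    w = np.asarray(weights, float)
    return len(w) * np.sum(w ** 2) / np.sum(w) ** 2

# Illustrative usage with made-up numbers:
# benchmarks = {"owns_mobile_phone": 0.91, "voted_last_election": 0.64}
# estimates  = {k: weighted_mean(df[k], df["final_wt"]) for k in benchmarks}
# print(mae(benchmarks, estimates), kish_deff(df["final_wt"]))
```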

These techniques offer valuable solutions for practitioners seeking to enhance the reliability of online survey data.