
Comparison of multi-mode surveys for data stability and quality: discussing empirical, data-backed outcomes from survey experiments in India

Yashwant Deshmukh (CVoter Foundation) - India
Gaura Shukla (CVoter Foundation) - India
Abdul Mannan (CVoter Foundation) - India
Neelabh Tyagi (CVoter Foundation) - India

Keywords: comparison, multi-mode, data stability, quality, empirical data, survey, India


Abstract

In previous ISSP panels we have discussed the implications of multi-mode survey experiments in India, particularly focusing on the stability of data across survey modes such as Computer-Assisted Telephone Interviewing (CATI), online surveys, and Face-to-Face (F2F) interviews. In a study comparing these modes, our colleagues found that response rates and data quality vary significantly. For instance, there is evidence suggesting that F2F may lead to higher social desirability bias than online or CATI modes, owing to the presence not just of an interviewer but often of the respondent's entire clan or fellow villagers, particularly in rural settings of developing societies. Needless to say, this can influence respondent behavior. Online surveys, however, may suffer from lower response rates, especially among demographics with limited internet access or limited comfort with digital technology, potentially skewing representativeness unless a mixed-mode approach is adopted to capture broader segments of the population. In such societies in transition, CATI works well, but interview duration becomes a critical determinant of response.

Regarding data stability, the studies we have been involved with or have cited indicate that while F2F interviews are often considered the gold standard for detailed and nuanced data collection, they come with higher costs and time investment. Online surveys offer cost efficiency and quicker data collection but may not match the depth or accuracy of F2F for complex questions or sensitive topics. CATI occupies an intermediate position: it is less expensive than F2F yet retains a personal touch that can enhance response quality over purely online methods. One key point from these discussions is the need for methodological adjustments when using different modes, for example ensuring that question wording, response options, and the overall survey experience are as consistent as possible across modes to maintain data comparability. We would like to emphasize the importance of mode comparison to understand how responses may differ, not just in terms of mean scores or univariate distributions but also in how modes affect the relationships between variables in multivariate models.
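To make this concrete, the brief sketch below illustrates one way such a mode comparison could be run. It is a minimal sketch, assuming a hypothetical respondent-level file (omnibus_modes.csv) with illustrative column names rather than our actual instrument. It first compares the distribution of an outcome across modes and then tests mode-by-covariate interactions in an ordinary least squares model, which probes whether mode affects relationships between variables and not only their means.

  # Illustrative mode-comparison sketch; file and column names are hypothetical.
  import pandas as pd
  import statsmodels.formula.api as smf

  # One row per respondent, with:
  #   mode        - survey mode ("F2F", "CATI", "Online")
  #   trust_score - a numeric attitudinal outcome
  #   age, urban  - example covariates
  df = pd.read_csv("omnibus_modes.csv")

  # Univariate comparison: distribution of the outcome by mode.
  print(df.groupby("mode")["trust_score"].describe())

  # Multivariate comparison: do mode interactions change the estimated
  # relationship between the covariates and the outcome?
  model = smf.ols("trust_score ~ C(mode) * age + C(mode) * C(urban)", data=df).fit()
  print(model.summary())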

So far, specific direct examples or detailed analyses from India on this exact topic are not readily available; the points above reflect the general discourse and research the industry is associated with or likely to endorse. We would like to fill this gap with empirical evidence, collected with the same instrument administered in different modes across all the languages used in our omnibus.